• Title/Abstract/Keyword: Facial Color Model


Multiple Face Segmentation and Tracking Based on Robust Hausdorff Distance Matching

  • Park, Chang-Woo;Kim, Young-Ouk;Sung, Ha-Gyeong;Park, Mignon
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • Vol. 3, No. 1
    • /
    • pp.87-92
    • /
    • 2003
  • This paper describes a system for tracking multiple faces in an input video sequence using convex-hull-based facial segmentation and a robust Hausdorff distance. The algorithm adopts a skin color reference map in the YCbCr color space and a hair color reference map in the RGB color space to classify face regions. An initial face model is then obtained through preprocessing and the convex hull. For tracking, the algorithm computes the displacement of the point set between frames using a robust Hausdorff distance and selects the best possible displacement. Finally, the initial face model is updated using this displacement. We provide an example to illustrate the proposed tracking algorithm, which efficiently tracks rotating and zooming faces, as well as multiple faces, in video sequences obtained from a CCD camera.
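As a rough illustration of the matching step described above, the sketch below computes a directed robust (partial) Hausdorff distance between a face-model point set and an image point set and searches a small window of integer displacements for the best match. The function names, the 0.8 rank fraction, and the ±8-pixel search range are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def robust_hausdorff(model_pts, image_pts, frac=0.8):
    """Directed robust (partial) Hausdorff distance: the frac-th ranked
    nearest-neighbour distance from the model points to the image points."""
    # pairwise distances between every model point and every image point
    d = np.linalg.norm(model_pts[:, None, :] - image_pts[None, :, :], axis=2)
    nearest = d.min(axis=1)                   # each model point's closest image point
    k = max(int(frac * len(nearest)) - 1, 0)  # rank used instead of the max (robustness)
    return np.partition(nearest, k)[k]

def best_displacement(model_pts, image_pts, search=range(-8, 9)):
    """Try integer (dx, dy) shifts of the model and keep the one with the
    smallest robust Hausdorff distance to the current frame's point set."""
    best_shift, best_dist = (0, 0), np.inf
    for dx in search:
        for dy in search:
            dist = robust_hausdorff(model_pts + np.array([dx, dy]), image_pts)
            if dist < best_dist:
                best_shift, best_dist = (dx, dy), dist
    return best_shift, best_dist
```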

Emotion Detection Algorithm Using Frontal Face Image

  • Kim, Moon-Hwan;Joo, Young-Hoon;Park, Jin-Bae
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 ICCAS 2005
    • /
    • pp.2373-2378
    • /
    • 2005
  • An emotion detection algorithm using frontal facial images is presented in this paper. The algorithm is composed of three main stages: an image processing stage, a facial feature extraction stage, and an emotion detection stage. In the image processing stage, the face region and facial components are extracted using a fuzzy color filter, a virtual face model, and a histogram analysis method. In the facial feature extraction stage, the features for emotion detection are extracted from the facial components. In the emotion detection stage, a fuzzy classifier is adopted to recognize emotions from the extracted features. Experimental results show that the proposed algorithm detects emotions well.
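The abstract does not give the fuzzy color filter's membership functions, so the sketch below only illustrates the general idea: triangular fuzzy sets on the chrominance channels combined with a min rule to produce a per-pixel skin membership map. The peak and spread values are hypothetical placeholders.

```python
import cv2
import numpy as np

def tri(x, a, b, c):
    """Triangular fuzzy membership function rising from a to b and falling to c."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-6), (c - x) / (c - b + 1e-6)), 0.0, 1.0)

def fuzzy_skin_map(bgr, cr_peak=150.0, cb_peak=110.0, spread=25.0):
    """Per-pixel skin membership from fuzzy sets on the Cr and Cb channels."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    mu_cr = tri(cr, cr_peak - spread, cr_peak, cr_peak + spread)
    mu_cb = tri(cb, cb_peak - spread, cb_peak, cb_peak + spread)
    return np.minimum(mu_cr, mu_cb)  # min rule: skin only if both channels agree

# face_mask = fuzzy_skin_map(frame) > 0.5
```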


컬러 정보를 이용한 실시간 표정 데이터 추적 시스템 (Realtime Facial Expression Data Tracking System using Color Information)

  • 이윤정;김영봉
    • 한국콘텐츠학회논문지
    • /
    • Vol. 9, No. 7
    • /
    • pp.159-170
    • /
    • 2009
  • Capturing a face and extracting facial expression data in real time is an essential task for online 3D facial animation. Recently, vision-based methods that capture an actor's expressions from video input and reproduce them directly on a 3D face model have been studied actively. This paper proposes a system that automatically detects and tracks the face and facial feature points from video captured in real time. The proposed system consists of a face detection stage and a facial feature extraction and tracking stage. For face detection, skin regions are segmented with a 3D YCbCr skin color model and a Haar-based detector decides whether each region is a face. The eye and mouth regions, which most affect facial expression, are detected using brightness information and the characteristic color of each region. From the detected eye and mouth regions, ten feature points are extracted based on the FAPs defined in MPEG-4, and the displacement of the feature points across consecutive frames is obtained by tracking color probability distributions. Experimental results show that the proposed system tracked expression data at about 8 frames per second.
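A minimal sketch of the detection stage described above, assuming OpenCV: skin pixels are segmented with fixed YCbCr thresholds and face candidates are confirmed with a Haar cascade. The threshold band and cascade file are common defaults, not the parameters of the paper's 3D skin color model.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(bgr):
    """Segment skin-coloured pixels in YCbCr, then confirm faces with a Haar cascade."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # loose Cr/Cb skin band
    masked = cv2.bitwise_and(bgr, bgr, mask=skin)
    gray = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Feature points inside the detected eye/mouth regions would then be tracked
# frame to frame, e.g. with a color-histogram tracker such as CamShift.
```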

외부광 차단을 위한 설진기 안면접촉부 설계 (Structural Design of Facial Contact Parts in Computerized Tongue Diagnosis System to Block Out External Light)

  • 김지혜;남동현
    • 대한한의진단학회지
    • /
    • Vol. 17, No. 3
    • /
    • pp.225-232
    • /
    • 2013
  • Objectives: The aim of this study is to design the part of a computerized tongue diagnosis system (CTDS) that contacts the face, so that external light is effectively blocked even if facial appearance and the degree of protrusion differ as a patient opens or closes the jaw. Methods: Each of four researchers manually produced clay models of the facial contact part of the CTDS. The light shielding and contact feel of the clay models were evaluated by 20 assessors. Based on this evaluation, we selected the most suitable model and produced a final silicone model, and then evaluated its shielding performance. We took tongue pictures of 60 participants with a CTDS fitted with the silicone model, both with and without external light. The RGB color values and gray-scale values of the tongue pictures taken with external light were compared with those taken without it. Results: There was no significant difference between the color values of the pictures taken with external light and those taken without it. Conclusions: We conclude that the produced facial contact part of the CTDS can effectively block out external light.
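The abstract compares per-image color values under the two lighting conditions; a paired test like the sketch below is one plausible way to carry out such a comparison (the paper does not state which statistical test was used, so this is an assumption).

```python
import numpy as np
from scipy import stats

def compare_conditions(values_with_light, values_without_light):
    """Paired t-test on per-image mean channel values (e.g. mean R of each
    of the 60 tongue pictures) taken with and without external light."""
    a = np.asarray(values_with_light, dtype=float)
    b = np.asarray(values_without_light, dtype=float)
    t, p = stats.ttest_rel(a, b)
    return t, p  # p > 0.05 suggests no significant difference, i.e. effective shielding
```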

메이크업 색채활용시스템 개발을 위한 화장색 이미지 지각 및 선호도 연구 - 20대 여성 모델을 중심으로 - (A Study on the Differences of Make-up Color Perception and Preference for the Development of Make-up Color System - Focused on a Female Model in Her Twenties -)

  • 이연희
    • 복식문화연구
    • /
    • Vol. 13, No. 5
    • /
    • pp.712-728
    • /
    • 2005
  • For the development of a make-up color system based on Koreans' sense of skin tone and make-up color, and to improve the efficiency of beauty education, this study used stimuli of a female model in her twenties wearing twenty-three different facial make-ups and surveyed how perceptions of them differ. The results and suggestions of this study are as follows. First, factor analysis of make-up color image perception yielded the factors Familiarity, Intelligence, Fitness, Charm, Tradition, and Youth. Second, the bare-face stimulus was evaluated as more familiar and intelligent than the stimuli with image make-up, but was perceived as unhealthy and not traditional. Third, skin tone strongly affected both the lip colors applied in monotone make-up and the image make-up applied with contrasting colors. These results confirm that skin tone and make-up colors were influential variables in facial image perception and preference for a female model in her twenties, and that image evaluation and preference can change with color contrast. This research can serve as a basic tool for developing a make-up color application system together with research on image perception and preference across demographic variables, and it aims to suggest alternatives for making current college make-up education more systematic and organized.


얼굴 특징영역상의 광류를 이용한 표정 인식 (Recognition of Facial Expressions using Optical Flow of Feature Regions)

  • 이미애;박기수
    • 한국정보과학회논문지:소프트웨어및응용
    • /
    • Vol. 32, No. 6
    • /
    • pp.570-579
    • /
    • 2005
  • Facial expression recognition is studied in many fields because of its countless potential applications, such as man-machine interface development, personal identification, and expression reconstruction with virtual models. This paper proposes a simple method for recognizing four expressions of basic human emotions, happiness, anger, surprise, and sadness, from facial video in which the face undergoes no rigid motion. First, the components that determine the face and its expression, and the feature regions of each component, are detected automatically using color, size, and position information. Next, a direction pattern is determined for each feature region from the optical flow estimated by a gradient method, and the patterns are matched against the direction models proposed in this work. In the pattern matching against the direction model representing each emotion, the emotion for which the combined matching value is smallest is judged to be the most similar, and the expression is recognized accordingly. Finally, experiments confirm the validity of the proposed method.
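A rough sketch of the direction-pattern idea, using OpenCV's Farneback dense optical flow as a stand-in for the paper's gradient-method estimator: flow inside one feature region is summarised as a direction histogram and matched to per-emotion direction models by smallest distance. The bin count and the distance measure are assumptions.

```python
import cv2
import numpy as np

def direction_pattern(prev_gray, next_gray, region, bins=8):
    """Dense optical flow inside one feature region, summarised as a
    normalised histogram over `bins` flow directions."""
    x, y, w, h = region
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray[y:y + h, x:x + w], next_gray[y:y + h, x:x + w],
        None, 0.5, 3, 15, 3, 5, 1.2, 0)
    ang = np.arctan2(flow[..., 1], flow[..., 0])          # flow direction per pixel
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)

def closest_emotion(pattern, emotion_models):
    """Pick the emotion whose direction model is nearest to the observed pattern."""
    return min(emotion_models, key=lambda e: np.abs(pattern - emotion_models[e]).sum())
```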

가버 필터와 밀도 기반 공간 클러스터링을 이용한 피부의 이상 영역 검출 (Detection of Abnormal Region of Skin using Gabor Filter and Density-based Spatial Clustering of Applications with Noise)

  • 전민성;최경주
    • 한국멀티미디어학회논문지
    • /
    • Vol. 21, No. 2
    • /
    • pp.117-129
    • /
    • 2018
  • In this paper, we propose a new system that detects abnormal regions of skin. First, an illumination-elimination algorithm based on the LAB color model is applied to the input facial image to obtain an image robust to illumination, and a Gabor filter is then applied to detect responses to discontinuities. Finally, the density-based spatial clustering of applications with noise (DBSCAN) algorithm is applied to classify regions of wrinkles, spots, and other skin diseases. This method allows users to check their skin condition in images taken in everyday life.
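A condensed sketch of this pipeline under stated assumptions: the lightness channel of LAB stands in for the paper's illumination-elimination step, a small Gabor filter bank highlights discontinuities, and DBSCAN groups the responding pixels into candidate regions. Filter and clustering parameters are illustrative.

```python
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def abnormal_skin_regions(bgr, thresh=0.6, eps=5, min_samples=20):
    """Gabor responses highlight discontinuities (wrinkles, spots); DBSCAN then
    groups the responding pixels into candidate abnormal regions."""
    # lightness channel of LAB as a rough illumination-normalised input
    l_chan = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)[..., 0].astype(np.float32) / 255.0
    response = np.zeros_like(l_chan)
    for theta in np.arange(0, np.pi, np.pi / 4):          # four filter orientations
        kern = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0, ktype=cv2.CV_32F)
        response = np.maximum(response, cv2.filter2D(l_chan, cv2.CV_32F, kern))
    ys, xs = np.where(response > thresh)                  # strongly responding pixels
    if len(xs) == 0:
        return []
    pts = np.column_stack([xs, ys])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    return [pts[labels == k] for k in set(labels) if k != -1]
```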

활성 윤곽선 모델을 이용한 얼굴 경계선 추출 (Facial Boundary Detection using an Active Contour Model)

  • 장재식;김은이;김항준
    • 전자공학회논문지CI
    • /
    • Vol. 42, No. 1
    • /
    • pp.79-87
    • /
    • 2005
  • This paper proposes an active contour model for extracting accurate facial boundaries in cluttered environments. In the proposed model, the contour is represented as the zero level set of a level function φ and is evolved through a partial differential equation on the level set. For the evolution and termination of the contour, the proposed model uses skin color information represented by a 2D Gaussian model. This yields a robust extraction method that obtains accurate facial boundaries even in complex images containing noise and various poses. To evaluate the effectiveness of the proposed method, experiments were conducted on a variety of images, and the results were compared with those of the geodesic active contour model. The experimental results show that the proposed method performs better.
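The sketch below illustrates the skin-color-driven idea with off-the-shelf pieces: a 2D Gaussian likelihood over the Cr/Cb channels gives a skin-probability map, and scikit-image's morphological Chan-Vese routine evolves a contour on it. The Gaussian parameters are placeholders, and morphological Chan-Vese is a stand-in for the paper's own level-set evolution, not a reimplementation of it.

```python
import cv2
import numpy as np
from skimage.segmentation import morphological_chan_vese

def skin_probability(bgr, mean, cov):
    """2D Gaussian likelihood of each pixel's (Cr, Cb) value under a skin model;
    mean and cov would normally be estimated from labelled skin samples."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float64)
    d = ycrcb[..., 1:3].reshape(-1, 2) - mean
    m = np.einsum("ij,jk,ik->i", d, np.linalg.inv(cov), d)  # squared Mahalanobis distance
    return np.exp(-0.5 * m).reshape(bgr.shape[:2])

def face_boundary(bgr, mean=(150.0, 110.0), cov=((120.0, 0.0), (0.0, 120.0))):
    """Evolve a contour on the skin-probability map (100 iterations)."""
    prob = skin_probability(bgr, np.asarray(mean), np.asarray(cov))
    return morphological_chan_vese(prob, 100, init_level_set="checkerboard")
```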

Cold sensitivity classification using facial image based on convolutional neural network

  • Ilkoo Ahn;Younghwa Baek;Kwang-Ho Bae;Bok-Nam Seo;Kyoungsik Jung;Siwoo Lee
    • 대한한의학회지
    • /
    • Vol. 44, No. 4
    • /
    • pp.136-149
    • /
    • 2023
  • Objectives: Facial diagnosis is an important part of clinical diagnosis in traditional East Asian Medicine. In this paper, we propose a model that quantitatively classifies cold sensitivity using a fully automated facial image analysis system. Methods: We investigated cold sensitivity in 452 subjects. Cold sensitivity was determined using a questionnaire, and the Cold Pattern Score (CPS) was used for analysis. Subjects with a CPS below the first quartile (low CPS group) were assigned to the cold non-sensitivity group, and subjects with a CPS above the third quartile (high CPS group) were assigned to the cold sensitivity group. After splitting the facial images into train/validation/test sets, the train and validation sets were fed into a convolutional neural network to train the model, and the classification accuracy was then calculated on the test set. Results: The classification accuracy between the low CPS group and the high CPS group using facial images of all subjects was 76.17%. The classification accuracy by sex was 69.91% for females and 62.86% for males. The deep learning model presumably used facial color or facial shape to separate the low and high CPS groups, but it is difficult to determine which feature was more important. Conclusions: The experimental results of this study show that the low CPS group and the high CPS group can be classified with a modest level of accuracy using only facial images. More advanced models need to be developed to increase classification accuracy.
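The paper does not disclose its network architecture, so the sketch below is only an illustrative baseline in PyTorch: a small binary CNN that would be trained with cross-entropy on the train split, model-selected on the validation split, and evaluated on the test split.

```python
import torch.nn as nn

class CPSClassifier(nn.Module):
    """Minimal binary CNN for low-CPS vs. high-CPS facial images (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # two classes: low CPS, high CPS

    def forward(self, x):                   # x: (batch, 3, H, W) facial images
        return self.classifier(self.features(x).flatten(1))
```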

범용 USB PC 카메라를 이용한 얼굴 특징점의 추적 (Facial Feature Tracking from a General USB PC Camera)

  • 양정석;이칠우
    • 한국정보과학회:학술대회논문집
    • /
    • 한국정보과학회 2001 Fall Conference Proceedings, Vol. 28, No. 2 (2)
    • /
    • pp.412-414
    • /
    • 2001
  • In this paper, we describe a real-time facial feature tracker that uses only a general USB PC camera without a frame grabber. The system achieves a rate of more than 8 frames per second without any low-level library support. It tracks the pupils, the nostrils, and the corners of the lips. The signal from the USB camera is in YUV 4:2:0 format. We convert the signal into the RGB color model to display the image, interpolate the V channel of the signal for extracting the facial region, and analyze 2D blob features in the Y channel (the luminance of the image) under geometric restrictions to locate each facial feature within the detected facial region. Our method is simple and intuitive enough for the system to work in real time.
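A minimal sketch of the color handling described above, assuming the camera delivers planar I420 (a common YUV 4:2:0 layout): the full frame is converted to RGB for display with OpenCV, and the Y plane is taken separately for the blob analysis.

```python
import cv2
import numpy as np

def i420_to_rgb(frame_bytes, width, height):
    """Convert one planar YUV 4:2:0 (I420) frame to RGB: the buffer holds the
    full-resolution Y plane followed by quarter-resolution U and V planes."""
    yuv = np.frombuffer(frame_bytes, dtype=np.uint8).reshape(height * 3 // 2, width)
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2RGB_I420)

def y_plane(frame_bytes, width, height):
    """Luminance (Y) plane alone, as used for the 2D blob analysis step."""
    return np.frombuffer(frame_bytes, dtype=np.uint8)[:width * height].reshape(height, width)
```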
