• Title/Summary/Keyword: Facial color


Facial Region Detection Using Facial Color Histogram & information of Edge (얼굴 칼라 히스토그램과 에지 정보를 이용한 얼굴 영역 검출)

  • 이정봉;박장춘
    • Proceedings of the Korean Information Science Society Conference / 2002.10d / pp.592-594 / 2002
  • We propose a detection system that combines an improved facial color histogram with edge information for face region detection. The method achieves robust extraction even for images with complex backgrounds in which background objects have a color distribution similar to that of the face region. For efficient face detection, the color distribution of the face is modeled as a probability histogram of facial color, and candidate face regions are detected by applying edge information and morphological filtering by reconstruction. Within each candidate region, the positional relationships among facial components are exploited: the eye regions are located from the brightness difference between the pupil and the sclera, and the mouth region is estimated from its relative position. Candidate regions in which this component information exists are finally judged to be face regions. Applying the proposed method to various images produced good results.
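The histogram-plus-morphology pipeline this abstract describes can be sketched roughly as follows. This is a minimal illustration using a hue-only probability histogram, backprojection, and a 4-neighbour opening; the paper's actual color model, reconstruction-based filtering, and parameters are not specified here.

```python
import numpy as np

def hue_histogram(skin_pixels, bins=32):
    """Probability histogram of sample skin hues (hue scaled to 0..1)."""
    hist, _ = np.histogram(skin_pixels, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def backproject(image_hue, hist):
    """Histogram backprojection: map each pixel's hue to its skin probability."""
    bins = len(hist)
    idx = np.clip((image_hue * bins).astype(int), 0, bins - 1)
    return hist[idx]

def binary_open(mask, iterations=1):
    """Morphological opening (4-neighbour erosion then dilation) on a bool mask."""
    def erode(m):
        p = np.pad(m, 1, constant_values=False)
        return (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
                & p[1:-1, :-2] & p[1:-1, 2:])
    def dilate(m):
        p = np.pad(m, 1, constant_values=False)
        return (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
                | p[1:-1, :-2] | p[1:-1, 2:])
    for _ in range(iterations):
        mask = dilate(erode(mask))
    return mask
```

Thresholding the backprojected probability map and opening the result yields candidate face regions; edge cues would then prune these further.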


Skin Condition Analysis of Facial Image using Smart Device: Based on Acne, Pigmentation, Flush and Blemish

  • Park, Ki-Hong;Kim, Yoon-Ho
    • Journal of Advanced Information Technology and Convergence / v.8 no.2 / pp.47-58 / 2018
  • In this paper, we propose a method for skin condition analysis using the camera module embedded in a smartphone, without a separate skin diagnosis device. The skin conditions detected in facial images taken by smartphone are acne, pigmentation, blemishes, and flushing. Facial features and regions were detected using Haar features, and skin regions were detected using the YCbCr and HSV color models. Acne and flushing were extracted by setting a range on the hue component image, and pigmentation was detected by computing a factor between the minimum and maximum values of the corresponding skin pixels in the R component image. Blemishes were detected on the basis of adaptive thresholds in grayscale images. Experimental results show that the proposed analysis effectively detected acne, pigmentation, blemishes, and flushing.
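The YCbCr-based skin detection step mentioned above can be sketched like this. The BT.601 conversion is standard; the Cb/Cr box thresholds are a common heuristic from the literature, not necessarily the authors' exact ranges.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 full-range RGB -> YCbCr (all values in 0..255)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Flag pixels whose Cb/Cr fall inside a skin-tone box (heuristic ranges)."""
    ycbcr = rgb_to_ycbcr(rgb.astype(float))
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1])
            & (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

The luminance channel Y is deliberately ignored, which is what makes Cb/Cr boxes fairly robust to brightness changes.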

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB / v.14B no.4 / pp.311-320 / 2007
  • This paper presents a vision-based 3D facial expression animation technique and system which provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has focused on facial expression control itself rather than on 3D head motion tracking. However, head motion tracking is one of the critical issues to be solved for developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, with the non-parametric HT skin color model and template matching, we can detect the facial region efficiently from a video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced based on the optical flow method. For facial expression cloning we utilize a feature-based method. The major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points comprise both head motion and facial expression information, the animation parameters which describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are changed using Radial Basis Functions (RBF). From the experiments, we show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from input video images.
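The RBF step described at the end, where non-feature points follow the control-point displacements, might look roughly like this. A Gaussian kernel and the width `sigma` are assumptions; the paper does not specify its kernel here.

```python
import numpy as np

def rbf_deform(controls, displacements, points, sigma=0.5):
    """Interpolate control-point displacements at arbitrary points with
    Gaussian RBFs: solve for weights that reproduce the control motion
    exactly, then evaluate the interpolant at the query points."""
    d2 = ((controls[:, None, :] - controls[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-d2 / (2 * sigma ** 2))      # kernel matrix between controls
    weights = np.linalg.solve(phi, displacements)
    d2q = ((points[:, None, :] - controls[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2q / (2 * sigma ** 2)) @ weights
```

At a control point the interpolant reproduces the given displacement exactly; between controls the motion falls off smoothly with distance.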

Quantitative Analysis of Face Color according to Health Status of Four Constitution Types for Korean Elderly Male (고연령 한국 남성의 사상 체질별 건강 수준에 따른 안색의 정량적 분석)

  • Do, Jun-Hyeong;Ku, Bon-Cho;Kim, Jang-Woong;Jang, Jun-Su;Kim, Sang-Gil;Kim, Keun-Ho;Kim, Jong-Yeol
    • Journal of Physiology & Pathology in Korean Medicine / v.26 no.1 / pp.128-132 / 2012
  • In this paper, we performed a quantitative analysis of face color according to the health status of the four constitution types. 205 Korean males aged 65 to 80 participated in this study, and 85 subjects were finally selected for the analysis. Image processing techniques were employed to extract feature variables associated with face color from a frontal facial image. Using the extracted feature variables, we investigated the correlations between face color and health status, between face color and health status within each constitution type, and between face color and the four constitution types within each health status group. As a result, it was observed that the face color of the healthy group contained more red and less blue components than that of the unhealthy group. For each constitution type, the face parts showing a significant difference according to health status were different. This is the first work to report the correlation between face color and the health status of the four constitution types with an objective method, and the numerical data for face color according to the health status of the four constitution types will serve as an objective standard to diagnose a patient's health status.
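As a rough illustration of the red/blue component comparison reported above (the study's actual feature variables and face-part segmentation are more elaborate):

```python
import numpy as np

def mean_rgb(face_region):
    """Mean R, G, B of a face-region pixel array of shape (H, W, 3)."""
    return face_region.reshape(-1, 3).mean(axis=0)

def redder_than(region_a, region_b):
    """True if region_a has a higher mean red and lower mean blue component
    than region_b, the direction the healthy group showed in the study."""
    a, b = mean_rgb(region_a), mean_rgb(region_b)
    return a[0] > b[0] and a[2] < b[2]
```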

A User Authentication System Using Face Analysis and Similarity Comparison (얼굴 분석과 유사도 비교를 이용한 사용자 인증 시스템)

  • Ryu Dong-Yeop;Yim Young-Whan;Yoon Sunnhee;Seo Jeong Min;Lee Chang Hoon;Lee Keunsoo;Lee Sang Moon
    • Journal of Korea Multimedia Society / v.8 no.11 / pp.1439-1448 / 2005
  • In this paper, we describe a method that detects the face region from an input image by comparing the similarity of color information in the upper torso and analyzing the geometric positions of important facial features, and then authenticates the user using ratio information and similarity comparison. Face extraction algorithms based on color information have an advantage over those based on shape information in that they are not affected by the pose or orientation of the face. However, because they rely on color alone, it is difficult to maintain accurate performance under changes of lighting or against backgrounds whose colors are similar to skin. Therefore, the method can be made more robust by also detecting characteristic features other than color, such as the eyes and lips, and performing a similarity comparison for each component. This paper proposes a system that divides the face into individual components, computes ratio-based features such as the similarity of the eyes and mouth for each person, applies weights to the more discriminative features in the similarity calculation, and recognizes the user by confirming similarity through a search. Experiments with the proposed method showed that the recognition rate improved.
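The weighted ratio-feature similarity idea can be sketched as follows; the feature ratios and weights are hypothetical, as the abstract does not give the exact features or weighting.

```python
import math

def weighted_similarity(probe, gallery, weights):
    """Weighted Euclidean similarity between two facial ratio-feature vectors;
    the distance is mapped into (0, 1], with 1.0 meaning identical features."""
    d = math.sqrt(sum(w * (p - g) ** 2
                      for p, g, w in zip(probe, gallery, weights)))
    return 1.0 / (1.0 + d)
```

Authentication would then accept the user whose enrolled vector gives the highest similarity above a threshold.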


COLORIMETRIC ANALYSIS OF EXTRACTED HUMAN TEETH AND FIVE SHADE GUIDES (발거된 자연치와 5종 Shade Guide의 색채 계측기를 이용한 색상 비교)

  • Hwang, In-Nam;Oh, Won-Man
    • Restorative Dentistry and Endodontics / v.22 no.2 / pp.769-781 / 1997
  • The tristimulus values of 180 extracted maxillary and mandibular anterior teeth were measured by colorimeter and converted to the Munsell color order system (Hue, Value, Chroma) and CIE $L^*a^*b^*$ color coordinates. The commonly used Vita and Bioform shade guides, two composite resin shade guides (Prisma APH and Z-100), and a glass-ionomer shade guide (Fuji II) were compared with these teeth. At the middle facial surface, the color distributions of the teeth were Hue (0.56YR to 9.77Y), Value (2.46 to 7.9), and Chroma (0.14 to 2.02), and the averaged values and standard deviations for $L^*a^*b^*$ were $63.18{\pm}10.44$, $1.11{\pm}1.66$, and $5.79{\pm}2.36$. The shade guides did not match well with the color space of the human teeth; in particular, the lack of Yellow-red Hues and higher Values was prominent. Compared with other measurements, the Hues of the teeth measured in this study were broadly distributed (covering most of the Y and YR ranges), while the Values and Chromas were lower.
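The XYZ-to-CIE $L^*a^*b^*$ conversion used for such colorimetric data follows the standard CIE formula, sketched below for a D65 reference white; the study's instrument may have used a different illuminant.

```python
import numpy as np

def xyz_to_lab(x, y, z, white=(95.047, 100.0, 108.883)):
    """CIE XYZ tristimulus values -> CIE L*a*b*, D65 reference white."""
    def f(t):
        t = np.asarray(t, dtype=float)
        delta = 6.0 / 29.0
        # cube root above the linear cutoff, linear segment below it
        return np.where(t > delta ** 3, np.cbrt(t),
                        t / (3 * delta ** 2) + 4.0 / 29.0)
    fx, fy, fz = f(x / white[0]), f(y / white[1]), f(z / white[2])
    return float(116 * fy - 16), float(500 * (fx - fy)), float(200 * (fy - fz))
```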


Study of the most frequent natural tooth colors in the Spanish population using spectrophotometry

  • Gomez-Polo, Cristina;Gomez-Polo, Miguel;Martinez Vazquez de Parga, Juan Antonio;Celemin Vinuela, Alicia
    • The Journal of Advanced Prosthodontics / v.7 no.6 / pp.413-422 / 2015
  • PURPOSE. To identify the most frequent natural tooth colors using the Easyshade Compact (Vita Zahnfabrik) spectrophotometer on a sample of the Spanish population according to the 3D Master System. MATERIALS AND METHODS. The middle third of the facial surface of natural maxillary central incisors was measured with an Easyshade Compact spectrophotometer (Vita Zahnfabrik) in 1361 Caucasian Spanish participants aged between 16 and 89 years. Natural tooth color was recorded using the 3D Master System nomenclature. The program used for the descriptive statistical analysis of the results was SAS 9.1.3. RESULTS. The results show that the most frequent dental color in the total sample studied is 3M1 (7.05%), followed by the intermediate shades 1M1.5 (6.91%) and 2L1.5 (6.02%). CONCLUSION. According to the research methodology used, and taking into account the limitations of this study, it can be proposed that the most frequent color among the Spanish population is 3M1; the most common lightness group is 2; the most frequent hue group according to the 3D Master System is M; and the most frequent chroma group is 1.5.

Deep Learning based Color Restoration of Corrupted Black and White Facial Photos (딥러닝 기반 손상된 흑백 얼굴 사진 컬러 복원)

  • Woo, Shin Jae;Kim, Jong-Hyun;Lee, Jung;Song, Chang-Germ;Kim, Sun-Jeong
    • Journal of the Korea Computer Graphics Society / v.24 no.2 / pp.1-9 / 2018
  • In this paper, we propose a method to restore corrupted black-and-white facial images to color. Previous studies have shown that when coloring damaged black-and-white photographs, such as old ID photographs, the area around the damaged region is often incorrectly colored. To solve this problem, this paper proposes a method of restoring the damaged area of the input photo first and then performing colorization based on the result. The proposed method consists of two steps: restoration based on a BEGAN (Boundary Equilibrium Generative Adversarial Network) model and coloring based on a CNN (Convolutional Neural Network). Our method uses the BEGAN model, which enables clearer and higher-resolution image restoration than existing methods based on the DCGAN (Deep Convolutional Generative Adversarial Network) model, and performs colorization on the restored black-and-white image. Experiments on various types of facial images and masks confirm that the proposed method can produce realistic color restoration results in many cases compared with previous studies.
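The restore-first ordering argued for above can be expressed as a simple pipeline. The `toy_restore` and `toy_colorize` functions below are hypothetical stand-ins for the paper's BEGAN and CNN models, just to make the composition concrete.

```python
def restore_then_colorize(damaged, restore_fn, colorize_fn):
    """Two-stage pipeline in the order the paper argues for: repair the
    damaged monochrome image first, then colorize the restored result.
    Colorizing before restoration would spread wrong colors around holes."""
    return colorize_fn(restore_fn(damaged))

def toy_restore(gray):
    """Hypothetical restorer: -1 marks a damaged pixel; fill it with the
    mean of its valid neighbours (stand-in for the BEGAN inpainter)."""
    out = list(gray)
    for i, v in enumerate(out):
        if v < 0:
            neighbours = [out[j] for j in (i - 1, i + 1)
                          if 0 <= j < len(out) and out[j] >= 0]
            out[i] = sum(neighbours) // len(neighbours)
    return out

def toy_colorize(gray):
    """Hypothetical colorizer: replicate intensity into three channels
    (stand-in for the CNN coloring network)."""
    return [(v, v, v) for v in gray]
```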

Detection Method of Human Face, Facial Components and Rotation Angle Using Color Value and Partial Template (컬러정보와 부분 템플릿을 이용한 얼굴영역, 요소 및 회전각 검출)

  • Lee, Mi-Ae;Park, Ki-Soo
    • The KIPS Transactions:PartB / v.10B no.4 / pp.465-472 / 2003
  • For effective pre-processing of a face input image, it is necessary to detect each of the face components, calculate the face area, and estimate the rotation angle of the face. The proposed method of this study can produce a robust result under conditions such as different levels of illumination, variable face sizes, face rotation angles, and background colors similar to the skin color of the face. The first step of the proposed method detects the estimated face area using both adapted skin color information in the wide-band HSV color coordinate converted from RGB coordinates, and skin color information from a histogram. Using the results of these processes, we can detect a lip area within the estimated face area. After estimating the rotation angle of the lip area about the X axis, the method determines the face shape based on face information. After detecting the eyes in the face area by matching a partial template made of both eyes, we can estimate the Y-axis rotation angle by calculating the eyes' locations in three-dimensional space with reference to the face area. Experiments on various face images verified the effectiveness of the proposed algorithm.
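As one small piece of such a pipeline, the in-plane (Z-axis) rotation of a face follows directly from two detected eye centers; the paper's X- and Y-axis estimates from the lip slope and 3D eye locations are more involved and are not reproduced here.

```python
import math

def inplane_rotation_deg(left_eye, right_eye):
    """In-plane rotation of the face, in degrees, from the line joining
    the two eye centers given as (x, y) image coordinates."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))
```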

Automatic Denoising of 2D Color Face Images Using Recursive PCA Reconstruction (2차원 칼라 얼굴 영상에서 반복적인 PCA 재구성을 이용한 자동적인 잡음 제거)

  • Park Hyun;Moon Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.2 s.308 / pp.63-71 / 2006
  • Denoising and reconstruction of color images are extensively studied in the field of computer vision and image processing. Denoising and reconstruction of color face images are especially more difficult than for natural images because of the structural characteristics of human faces as well as the subtleties of color interactions. In this paper, we propose a denoising method based on PCA reconstruction for removing complex color noise on human faces, which is not easy to remove using vectorial color filters. The proposed method is composed of the following five steps: training of a canonical eigenface space using PCA, automatic extraction of facial features using an active appearance model, smoothing of the reconstructed color image using a bilateral filter, extraction of noise regions using the variance of the training data, and reconstruction using partial information of the input image (excluding the noise regions) followed by blending of the reconstructed image with the original image. Experimental results show that the proposed denoising method maintains the structural characteristics of the input faces while efficiently removing complex color noise.
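The PCA reconstruction at the heart of this method can be sketched with eigenfaces learned by SVD. This is a minimal version; the paper additionally detects noise regions, applies bilateral filtering, and blends the reconstruction with the original image.

```python
import numpy as np

def fit_pca(faces, k):
    """Learn the mean face and top-k principal axes from flattened faces
    stacked as rows of a (num_faces, num_pixels) matrix."""
    mean = faces.mean(axis=0)
    # SVD of the centered data matrix; rows of vt are the principal axes
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:k]

def pca_reconstruct(face, mean, basis):
    """Project a (possibly noisy) face onto the eigenface subspace and
    reconstruct it; noise outside the subspace is suppressed."""
    coeffs = basis @ (face - mean)
    return mean + basis.T @ coeffs
```

Because the learned subspace captures valid face structure, projecting a corrupted face onto it pulls the result back toward a plausible face.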