• Title/Summary/Keyword: 얼굴 색상 (face color)

Detection Method of Human Face, Facial Components and Rotation Angle Using Color Value and Partial Template (컬러정보와 부분 템플릿을 이용한 얼굴영역, 요소 및 회전각 검출)

  • Lee, Mi-Ae;Park, Ki-Soo
    • The KIPS Transactions:PartB / v.10B no.4 / pp.465-472 / 2003
  • For an effective pre-processing of a face input image, it is necessary to detect each of the face components, calculate the face area, and estimate the rotation angle of the face. The method proposed in this study can produce a robust result under conditions such as different levels of illumination, variable face sizes, face rotation angles, and background colors similar to the skin color of the face. The first step of the proposed method detects the estimated face area, calculated from both adapted skin color information in the band-wide HSV color coordinates converted from RGB coordinates and skin color information from a histogram. Using the results of the former processes, we can detect a lip area within the estimated face area. After estimating the rotation-angle slope of the lip area along the X axis, the method determines the face shape based on face information. After detecting the eyes in the face area by matching a partial template made from both eyes, we can estimate the Y-axis rotation angle by calculating the eyes' locations in three-dimensional space with reference to the face area. Experiments on various face images verified the effectiveness of the proposed algorithm.
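
A minimal sketch of the kind of HSV skin-color thresholding described in the abstract above, written in Python with OpenCV. The fixed threshold band and the largest-blob heuristic are illustrative assumptions; the paper adapts its skin-color band per image using histogram information, which is not reproduced here.

```python
import cv2
import numpy as np

def skin_mask_hsv(bgr_image):
    """Rough skin-color mask in HSV space (illustrative thresholds only)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Broad hue/saturation/value band often used for skin; the paper adapts
    # its band per image using histogram information, which is omitted here.
    lower = np.array([0, 40, 60], dtype=np.uint8)
    upper = np.array([25, 180, 255], dtype=np.uint8)
    return cv2.inRange(hsv, lower, upper)

def estimate_face_box(mask):
    """Bounding box of the largest skin-colored blob as a crude face estimate."""
    # OpenCV 4.x return convention: (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)  # (x, y, w, h)
```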

Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model (다양한 형식의 얼굴정보와 준원근 카메라 모델해석을 이용한 얼굴 특징점 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB / v.9B no.5 / pp.563-570 / 2002
  • Robust extraction of 3D facial features and global motion information from a 2D image sequence for MPEG-4 SNHC face model encoding is described. The facial regions are detected from the image sequence using a multi-modal fusion technique that combines range, color, and motion information. 23 facial features among the MPEG-4 FDP (Face Definition Parameters) are extracted automatically inside the facial region using color transforms (GSCD, BWCD) and morphological processing. The extracted facial features are used to recover the 3D shape and global motion of the object using a paraperspective camera model and the SVD (Singular Value Decomposition) factorization method. A 3D synthetic object is designed and tested to show the performance of the proposed algorithm. The recovered 3D motion information is transformed into global motion parameters of the MPEG-4 FAP (Face Animation Parameters) to synchronize a generic face model with a real face.
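
The SVD factorization step can be illustrated with a generic Tomasi-Kanade-style rank-3 decomposition of the tracked-feature measurement matrix, sketched below in Python/NumPy. The paraperspective normalization and the metric-upgrade constraints used in the paper are omitted, so this shows only where the SVD enters.

```python
import numpy as np

def factorize_measurements(W):
    """Rank-3 factorization of a 2F x P measurement matrix W into motion and shape.

    W stacks the x/y image coordinates of P tracked features over F frames,
    with the per-frame centroid removed.  This is the orthographic
    Tomasi-Kanade form; the paraperspective variant differs in how W is
    normalized beforehand.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Keep the three dominant singular values (rank-3 approximation).
    U3, S3, V3 = U[:, :3], np.diag(s[:3]), Vt[:3, :]
    motion = U3 @ np.sqrt(S3)   # 2F x 3 camera/motion factor
    shape = np.sqrt(S3) @ V3    # 3 x P  3D shape factor (up to an affine ambiguity)
    return motion, shape
```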

Fuzzy-Model-based Emotion Recognition Using Advanced Face Detection (향상된 얼굴 인식 기술을 이용한 퍼지 모델 기반의 감성인식)

  • Yoo, Tae-Il;Kim, Kwang-Bae;Joo, Young-Hoon
    • Proceedings of the KIEE Conference / 2006.07d / pp.2083-2084 / 2006
  • This paper proposes a method that recognizes the face using a face detection algorithm that is robust to changes in illumination and is faster and more accurate than the existing fuzzy color filter, extracts feature points (eyes, eyebrows, mouth) from the face, and classifies emotion using the extracted feature points. The advanced face detection technique refers to a scheme that compensates for the weakness of the fuzzy color filter, whose processing speed degrades with image size, by using a minimal set of rules to select face candidate regions and applying the filter only to them to extract the face region. Within the extracted face region, emotion is classified using the feature points of the eyes, eyebrows, and mouth, which show the most pronounced changes when the emotion changes.
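
A rough sketch, in Python/OpenCV, of the candidate-region pre-selection idea: cheap rules pick image blocks worth passing to a heavier (e.g., fuzzy color) filter. The block size, skin band, and ratio threshold are illustrative assumptions, not the rules used in the paper.

```python
import cv2
import numpy as np

def candidate_blocks(bgr_image, block=64, min_skin_ratio=0.2):
    """Cheap rule-based pre-filter: keep only blocks with enough skin-like pixels.

    A heavier (e.g. fuzzy color) filter would then run only on these blocks;
    the thresholds here are illustrative, not the rules used in the paper.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, np.array([0, 40, 60], np.uint8),
                       np.array([25, 180, 255], np.uint8))
    h, w = skin.shape
    boxes = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = skin[y:y + block, x:x + block]
            if np.count_nonzero(patch) / float(block * block) >= min_skin_ratio:
                boxes.append((x, y, block, block))  # candidate face region
    return boxes
```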

Lip Detection Algorithm Using Color Clustering (색상 군집화를 이용한 입술탐지 알고리즘)

  • Jeong, Jongmyeon;Choi, Jiyun;Seo, Ji Hyuk;Lee, Se Jun
    • Proceedings of the Korean Society of Computer Information Conference / 2012.07a / pp.277-278 / 2012
  • This paper proposes a lip detection algorithm using color clustering. To this end, face detection is first performed with the well-known AdaBoost method. The Lab color system is then applied to the detected face region, and the skin region is extracted using color markers based on the characteristics of lip pixels. Within the extracted skin region, the lip region is obtained through K-means color clustering. Experiments confirmed the lip detection results.
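
The pipeline in the abstract above can be approximated in Python/OpenCV as below: a Haar-cascade face detector stands in for AdaBoost-based detection, followed by K-means clustering of the a*/b* chromaticity in the lower face. The cluster-selection rule (most reddish a* center) and the use of the lower face half are illustrative assumptions in place of the paper's color markers.

```python
import cv2
import numpy as np

# Haar-cascade face detection followed by K-means clustering of the a*/b*
# chromaticity inside the face box; the most reddish (highest a*) cluster is
# taken as lip-like.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def lip_mask(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = bgr_image[y + h // 2:y + h, x:x + w]          # lower half of the face
    lab = cv2.cvtColor(face, cv2.COLOR_BGR2LAB)
    ab = lab[:, :, 1:3].reshape(-1, 2).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(ab, 3, None, criteria, 5,
                                    cv2.KMEANS_RANDOM_CENTERS)
    lip_cluster = int(np.argmax(centers[:, 0]))          # most reddish a* centre
    mask = (labels.reshape(lab.shape[:2]) == lip_cluster).astype(np.uint8) * 255
    return mask
```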

A User Authentication System Using Face Analysis and Similarity Comparison (얼굴 분석과 유사도 비교를 이용한 사용자 인증 시스템)

  • Ryu Dong-Yeop;Yim Young-Whan;Yoon Sunnhee;Seo Jeong Min;Lee Chang Hoon;Lee Keunsoo;Lee Sang Moon
    • Journal of Korea Multimedia Society / v.8 no.11 / pp.1439-1448 / 2005
  • In this paper, we describe a method that detects the face area of an input image by comparing the similarity of color information in the upper body and analyzing the geometric positions of important facial features, and then performs user authentication using ratio information derived from those features. Face extraction algorithms based on color information have an advantage over those based on shape information in that they are not affected by the angle or position of the face. However, because they rely on color alone, it is difficult to maintain accurate performance under changes in lighting or when the background contains colors similar to skin color. The method can therefore be used more effectively by also detecting the characteristic information of important facial elements such as the eyes and lips and performing a similarity comparison for each component. This paper proposes a system that divides the face into its components, computes ratio-based features for each component, assigns weights to them in the similarity calculation, and recognizes the user by confirming the similarity through a search. Experiments with the proposed method showed that the recognition rate improves.
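
As a loose illustration of ratio-based, weighted similarity comparison, the sketch below scores two vectors of facial ratio features; the specific features, weights, and scoring function are assumptions for illustration only.

```python
import numpy as np

def weighted_similarity(ratios_query, ratios_enrolled, weights):
    """Weighted similarity between two vectors of facial ratio features.

    ratios_* hold per-component measurements (e.g. eye spacing / face width,
    mouth width / face width); weights emphasise the more discriminative
    components.  The specific features and weights are assumptions for
    illustration, not the ones used in the paper.
    """
    q = np.asarray(ratios_query, dtype=float)
    e = np.asarray(ratios_enrolled, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Similarity in (0, 1]: 1 means identical ratios under the given weighting.
    return 1.0 / (1.0 + np.sqrt(np.sum(w * (q - e) ** 2)))
```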

Automatic Generation of the Personal 3D Face Model (3차원 개인 얼굴 모델 자동 생성)

  • Ham, Sang-Jin;Kim, Hyoung-Gon
    • Journal of the Korean Institute of Telematics and Electronics S / v.36S no.1 / pp.104-114 / 1999
  • This paper proposes an efficient method for the automatic generation of a personalized 3D face model from a color image sequence. To detect a robust facial region in a complex background, a moving color detection technique based on the facial color distribution is suggested. Color distribution and edge position information in the detected face region are used to extract the 31 facial feature points of the facial definition parameters (FDP) proposed by the MPEG-4 SNHC (Synthetic-Natural Hybrid Coding) ad hoc group. The extracted feature points are then applied to the corresponding vertex points of a 3D generic face model composed of 1038 triangular mesh points. The personalized 3D face model can be generated automatically in less than 2 seconds on a Pentium PC.
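
A simple way to illustrate moving-color detection is to intersect a skin-color mask with a frame-difference mask, as sketched below in Python/OpenCV; the thresholds are illustrative and the paper's facial color distribution model is not reproduced.

```python
import cv2
import numpy as np

def moving_skin_mask(prev_bgr, curr_bgr, motion_thresh=20):
    """Combine a skin-colour mask with simple frame differencing.

    Keeping only pixels that are both skin-coloured and moving is one way to
    reject skin-like static background; thresholds are illustrative.
    """
    hsv = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, np.array([0, 40, 60], np.uint8),
                       np.array([25, 180, 255], np.uint8))
    diff = cv2.absdiff(cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY))
    moving = (diff > motion_thresh).astype(np.uint8) * 255
    return cv2.bitwise_and(skin, moving)
```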

A Real-Time Face Region Extraction Using Motion And Color Information (움직임과 색상 정보를 이용한 실시간 얼굴영역 검출에 관한 연구)

  • Park Sung-Jin;Han Sang-Il;Cha Hyung-Tai
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2005.04a / pp.441-445 / 2005
  • Although face recognition technology is used as a tool for authentication and security, its applicability is inevitably limited by the condition of the input image, that is, by complex backgrounds and lighting environments. This paper presents a method to minimize these constraints and a technique for more accurate face region detection. The proposed method detects the face contour using a motion-based edge difference image and then predicts the face region using X- and Y-axis profiles. It then uses the skin color information of the face and the edge information of characteristic components such as the eyes, nose, and mouth to partition the region vertically and determine whether it is a face. Experiments confirmed that the proposed algorithm applies very stably even to input images affected by many environmental factors such as varied backgrounds and lighting.
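
A minimal Python/OpenCV sketch of the motion-edge difference image and the X/Y projection profiles used to predict a face box; the later skin-color and eye/nose/mouth verification steps, and the actual thresholds, are omitted or assumed.

```python
import cv2
import numpy as np

def face_box_from_profiles(prev_gray, curr_gray, min_edge_pixels=30):
    """Predict a face box from the X/Y projection profiles of a motion-edge image.

    prev_gray and curr_gray are uint8 grayscale frames.  The edge difference
    image and profile-based bounding estimate follow the general idea in the
    abstract; the skin-colour and feature verification steps are omitted.
    """
    edges_prev = cv2.Canny(prev_gray, 50, 150)
    edges_curr = cv2.Canny(curr_gray, 50, 150)
    motion_edges = cv2.absdiff(edges_curr, edges_prev)
    col_profile = motion_edges.sum(axis=0)   # projection onto the X axis
    row_profile = motion_edges.sum(axis=1)   # projection onto the Y axis
    # Keep rows/columns containing at least min_edge_pixels moving edge pixels.
    xs = np.where(col_profile > min_edge_pixels * 255)[0]
    ys = np.where(row_profile > min_edge_pixels * 255)[0]
    if xs.size == 0 or ys.size == 0:
        return None
    return xs[0], ys[0], xs[-1] - xs[0], ys[-1] - ys[0]   # (x, y, w, h)
```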

A Study on New RGB Space Transformation for Skin Color Detection (새로운 RGB영역 변환을 이용한 Skin Color Detection에 관한 연구)

  • Chung, Won-Serk;Lee, Hyung-Ji;Chung, Jae-Ho
    • Proceedings of the Korea Information Processing Society Conference / 2000.10b / pp.915-918 / 2000
  • This paper introduces a face detection algorithm that uses color information. The algorithm, which can be applied to detecting multiple faces, is largely divided into two stages: a skin color training process and a face detection process on the input image. In particular, this study builds the training data using the property that skin colors form a straight line in the new RGB space proposed in this paper. Applying the constructed data to the input image determines the first-stage face candidate regions. The final face region is then found by projecting the first-stage candidate region in the vertical and horizontal directions. Experiments showed that, compared with existing color-based face detection methods, the algorithm achieves a high detection success rate regardless of the number of faces.
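
Since the paper's new RGB-derived space is not given in the abstract, the sketch below uses plain normalized r-g chromaticity as a stand-in: a line is fitted to training skin pixels and candidate pixels are scored by their distance to that line. Treat the color space and the scoring as assumptions.

```python
import numpy as np

def fit_skin_line(skin_rgb_samples):
    """Fit a straight line (by PCA) to training skin pixels in a colour space.

    The paper defines its own RGB-derived space in which skin pixels are said
    to lie on a line; plain normalized r-g chromaticity is used here instead.
    """
    rgb = np.asarray(skin_rgb_samples, dtype=float)
    s = rgb.sum(axis=1, keepdims=True) + 1e-6
    rg = rgb[:, :2] / s                       # normalized (r, g) chromaticity
    mean = rg.mean(axis=0)
    _, _, vt = np.linalg.svd(rg - mean, full_matrices=False)
    direction = vt[0]                         # principal direction of the cluster
    return mean, direction

def skin_distance(pixel_rgb, mean, direction):
    """Distance of a pixel's chromaticity from the fitted skin line."""
    p = np.asarray(pixel_rgb, dtype=float)
    rg = p[:2] / (p.sum() + 1e-6)
    offset = rg - mean
    return np.linalg.norm(offset - np.dot(offset, direction) * direction)
```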

A Realtime Expression Control for Realistic 3D Facial Animation (현실감 있는 3차원 얼굴 애니메이션을 위한 실시간 표정 제어)

  • Kim Jung-Gi;Min Kyong-Pil;Chun Jun-Chul;Choi Yong-Gil
    • Journal of Internet Computing and Services / v.7 no.2 / pp.23-35 / 2006
  • This work presents a novel method which automatically extracts the facial region and features from motion pictures and controls the 3D facial expression in real time. To extract the facial region and facial feature points from each color frame of a motion picture, a new nonparametric skin color model is proposed rather than using a parametric skin color model. Conventionally used parametric skin color models, which represent the facial color distribution as Gaussian, lack robustness under varying lighting conditions. Thus they require additional work to extract the exact facial region from face images. To resolve the limitations of current skin color models, we exploit the Hue-Tint chrominance components and represent the skin chrominance distribution as a linear function, which can reduce errors in detecting the facial region. Moreover, the minimal facial feature positions detected by the proposed skin model are adjusted by using edge information of the detected facial region along with the proportions of the face. To produce realistic facial expressions, we adopt Waters' linear muscle model and apply an extended version of Waters' muscles to the variation of the facial features of the 3D face. The experiments show that the proposed approach efficiently detects facial feature points and naturally controls the facial expression of the 3D face model.
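
The muscle-based expression control can be hinted at with a simplified linear-muscle-style deformation: vertices inside a cone around the muscle vector move toward the attachment point with cosine angular falloff and linear radial falloff. This only approximates the spirit of Waters' model; the paper's extended muscles are not reproduced.

```python
import numpy as np

def muscle_displace(vertices, v1, v2, contraction, influence_angle=np.pi / 3):
    """Displace mesh vertices toward a muscle attachment point v1.

    v1 is the (static) attachment point, v2 the insertion point, and
    contraction in [0, 1] pulls affected vertices toward v1.  The falloff
    functions are simplified stand-ins, not Waters' exact formulation.
    """
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    axis = (v2 - v1) / (np.linalg.norm(v2 - v1) + 1e-9)
    reach = np.linalg.norm(v2 - v1)
    out = np.array(vertices, dtype=float)
    for i, p in enumerate(out):
        d = p - v1
        dist = np.linalg.norm(d)
        if dist < 1e-9 or dist > reach:
            continue                                   # outside the muscle's reach
        angle = np.arccos(np.clip(np.dot(d / dist, axis), -1.0, 1.0))
        if angle > influence_angle:
            continue                                   # outside the influence cone
        falloff = np.cos(angle) * (1.0 - dist / reach)
        out[i] = p - contraction * falloff * d         # move toward v1
    return out
```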

Face detection in compressed domain using color balancing for various illumination conditions (다양한 조명 환경에서의 실시간 사용자 검출을 위한 압축 영역에서의 색상 조절을 사용한 얼굴 검출 방법)

  • Min, Hyun-Seok;Lee, Young-Bok;Shin, Ho-Chul;Lim, Eul-Gyoon;Ro, Yong-Man
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.140-145 / 2009
  • Significant attention has recently been drawn to human-robot interaction systems that use face detection technology. Most conventional face detection methods operate in the pixel domain. These pixel-based face detection methods require high computational power. Hence, conventional methods do not suit the robot environment, which requires the robot to operate with limited computing and storage resources. Also, compensating for illumination variation is important and necessary for reliable face detection. In this paper, we propose an illumination-invariant face detection method that is performed in the compressed domain. The proposed method uses a color balancing module to compensate for illumination variation. Experiments show that the proposed face detection method can effectively increase the face detection rate under varying illumination.
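
Gray-world color balancing is one common way to compensate illumination before detection and is sketched below as a stand-in for the paper's color balancing module; the paper operates in the compressed domain, which is not reproduced here.

```python
import numpy as np

def gray_world_balance(bgr_image):
    """Gray-world colour balancing: scale each channel so its mean matches
    the overall mean intensity.

    A common illumination-compensation step, used here only as a stand-in for
    the colour balancing module described in the abstract.
    """
    img = bgr_image.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    overall = channel_means.mean()
    gains = overall / (channel_means + 1e-6)
    balanced = np.clip(img * gains, 0, 255)
    return balanced.astype(np.uint8)
```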
