• Title/Summary/Keyword: Facial Color Model

Extraction of Facial Region and features Using Snakes in Color Image (Snakes 알고리즘을 이용한 얼굴영역 및 특징추출)

  • 김지희;민경필;전준철
    • Proceedings of the Korean Information Science Society Conference / 2001.04b / pp.496-498 / 2001
  • The Snake model (active contour model) is an algorithm that automatically finds the contour of an arbitrary object once initial values are given, and it is widely used to segment specific regions of an image. In this paper, we apply this algorithm to locate the face and facial feature points in color images. In particular, a preprocessing step that normalizes the RGB values of the given image produces candidate regions for the facial feature points, so that the initial values no longer have to be set manually and more accurate results can be obtained. By comparing an implementation that uses the RGB normalization step with one that does not, we show that the method with normalization performs better.
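
A minimal sketch of the kind of RGB normalization (rg chromaticity) preprocessing described in the abstract above; the input file name and the threshold values are illustrative assumptions, not values from the paper.

```python
import numpy as np
import cv2  # OpenCV, used only for image I/O here

def normalized_rgb(image_bgr):
    """Convert an 8-bit BGR image to normalized rg chromaticity.

    r = R / (R + G + B), g = G / (R + G + B); brightness is factored out,
    which makes skin and lip tones cluster more tightly.
    """
    img = image_bgr.astype(np.float64)
    b, g, r = img[..., 0], img[..., 1], img[..., 2]
    s = r + g + b + 1e-6            # avoid division by zero on black pixels
    return r / s, g / s

# Illustrative use: a rough candidate mask for skin-like pixels.
image = cv2.imread("face.jpg")      # hypothetical input image
r_norm, g_norm = normalized_rgb(image)
# Example thresholds only -- real bounds would be tuned on training data.
skin_candidates = (r_norm > 0.35) & (r_norm < 0.50) & (g_norm > 0.28) & (g_norm < 0.37)
```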

A Differences in Preference and Evaluation on the Image of Make-up (Part II) -Focused on Perceiver's Age & Habitant- (화장색 이미지평가와 선호도 차이 (제2보) -지각자의 연령과 거주지를 중심으로-)

  • Lee Yon-Hee
    • Journal of the Korean Society of Clothing and Textiles / v.30 no.5 s.153 / pp.684-698 / 2006
  • This study used stimuli of a female model in her twenties wearing twenty-two different facial make-up styles. The subjects were one thousand low hundred ninety-seven purposively sampled male and female adults from throughout the country. The survey was conducted over one month, in December 2004, and the data were analyzed with factor analysis, t-tests, analysis of variance, Cronbach's α, and Duncan's multiple range test. The results are as follows. First, factor analysis of make-up color image perception yielded the factors Familiarity, Intelligence, Fitness, Charm, Tradition, and Youth. Second, for lip color perception on a bright skin tone, Intelligence and Charm differed by age; for image make-up perception on a bright skin tone, Familiarity and Charm differed by age, especially for the Cool image make-up. Third, for lip color perception on a dark skin tone, Intelligence and Charm differed by residence; for image make-up perception on a bright skin tone, there were differences by residence in Familiarity and Charm, and, for a bright skin tone, in Intelligence, Charm, Tradition, and Youth. Fourth, there were interaction effects between the perceivers' gender and lip color, and between image make-up and the perceivers' residence. Lastly, in terms of preference, lip color was more affected by age, while image make-up was more affected by the perceivers' residence.

Adaptive Skin Color Segmentation in a Single Image using Image Feedback (영상 피드백을 이용한 단일 영상에서의 적응적 피부색 검출)

  • Do, Jun-Hyeong;Kim, Keun-Ho;Kim, Jong-Yeol
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.3 / pp.112-118 / 2009
  • Skin color segmentation techniques have been widely used for face and hand detection and tracking in applications such as diagnosis systems based on facial information, human-robot interaction, and image retrieval. For video, the skin color model of a target is commonly updated every frame so that tracking remains robust against illumination changes. For a single image, however, most studies employ a fixed skin color model, which may result in a low detection rate or a high false-positive rate. In this paper, we propose a novel method for effective skin color segmentation in a single image, which iteratively modifies the segmentation conditions using feedback from the skin color region segmented in the given image.
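
A rough sketch of the feedback idea described above: start from a generic skin color range, segment, re-estimate the color statistics from the segmented pixels, and repeat. The color space, initial bounds, and iteration count are assumptions for illustration, not the authors' actual conditions.

```python
import cv2
import numpy as np

def adaptive_skin_mask(image_bgr, iterations=3):
    """Iteratively refine a skin mask by feeding back the statistics
    of the currently segmented region (illustrative only)."""
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]

    # Generic initial Cr/Cb bounds for skin (commonly cited ranges).
    lo = np.array([133.0, 77.0])
    hi = np.array([173.0, 127.0])

    mask = None
    for _ in range(iterations):
        mask = ((cr >= lo[0]) & (cr <= hi[0]) &
                (cb >= lo[1]) & (cb <= hi[1]))
        if mask.sum() < 100:        # too few pixels to re-estimate reliably
            break
        # Feedback step: re-center the bounds on the segmented pixels.
        mean = np.array([cr[mask].mean(), cb[mask].mean()])
        std = np.array([cr[mask].std(), cb[mask].std()])
        lo, hi = mean - 2.5 * std, mean + 2.5 * std
    return mask.astype(np.uint8) * 255
```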

Effective Detection of Target Region Using a Machine Learning Algorithm (기계 학습 알고리즘을 이용한 효과적인 대상 영역 분할)

  • Jang, Seok-Woo;Lee, Gyungju;Jung, Myunghee
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.5 / pp.697-704 / 2018
  • Since a face in image content constitutes personal information that can distinguish a specific person from others, it is important to accurately detect faces that are not hidden in an image. In this paper, we propose a method to accurately detect faces in input images using a deep learning algorithm, one of the machine learning methods. In the proposed method, the input image in the red-green-blue (RGB) color model is first converted to the luminance-chrominance (YCbCr) color model; other regions are then removed using a learned skin color model, and only the skin regions are segmented. A CNN-based deep learning algorithm is then applied to robustly detect only the face region in the input image. Experimental results show that the proposed method segments facial regions from input images more efficiently. The proposed face-region detection method is expected to be useful in practical applications related to multimedia and shape recognition.
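
A minimal sketch of the preprocessing stage described above (RGB to YCbCr conversion followed by skin color masking). The threshold ranges are generic illustrative values, not the paper's learned skin color model, and the CNN detector itself is omitted.

```python
import cv2
import numpy as np

def skin_region_only(image_bgr):
    """Keep only skin-colored pixels as a pre-filter before a CNN face detector."""
    # OpenCV orders the converted channels as Y, Cr, Cb.
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    # Illustrative skin bounds in (Y, Cr, Cb); a learned model would replace these.
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    return cv2.bitwise_and(image_bgr, image_bgr, mask=mask)

# The masked image would then be passed to a CNN-based face detector;
# the paper's own trained network is not reproduced here.
```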

Human Head Mouse System Based on Facial Gesture Recognition

  • Wei, Li;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.10 no.12 / pp.1591-1600 / 2007
  • Camera position information derived from a 2D face image is important for synchronizing a virtual 3D face model with the real face at the current viewpoint, and also for other uses such as human-computer interfaces (a face mouse) and automatic camera control. We present an algorithm that detects the human face region and the mouth based on the particular color characteristics of the face and mouth in the YCbCr color space. The algorithm constructs a mouth feature image from the Cb and Cr values and uses a pattern-based method to detect the mouth position. The geometric relationship between the mouth position and the face's side boundaries is then used to determine the camera position. Experimental results demonstrate the validity of the proposed algorithm, and the correct determination rate is sufficient for practical application.
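
One common way to build a Cb/Cr-based mouth feature image, sketched below in the spirit of the description above (not necessarily the authors' exact formula): mouth pixels tend to have a stronger Cr response and a higher Cr/Cb ratio than the rest of the face.

```python
import cv2
import numpy as np

def mouth_map(face_bgr):
    """Mouth feature image from Cb/Cr; bright where Cr dominates Cb
    (illustrative formulation)."""
    ycrcb = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float64)
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]

    cr2 = cr * cr
    ratio = cr / (cb + 1e-6)
    # Normalize both terms so they are comparable in magnitude.
    cr2 /= cr2.max() + 1e-6
    ratio /= ratio.max() + 1e-6

    eta = 0.95 * cr2.mean() / (ratio.mean() + 1e-6)
    m = cr2 * (cr2 - eta * ratio) ** 2
    return cv2.normalize(m, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# The brightest blob of mouth_map(face) inside the lower half of the detected
# face box would then be taken as the mouth position.
```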

Automatic Denoising of 2D Color Face Images Using Recursive PCA Reconstruction (2차원 칼라 얼굴 영상에서 반복적인 PCA 재구성을 이용한 자동적인 잡음 제거)

  • Park Hyun;Moon Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.2 s.308 / pp.63-71 / 2006
  • Denoising and reconstruction of color images are extensively studied in the fields of computer vision and image processing. Denoising and reconstruction of color face images are more difficult than for natural images because of the structural characteristics of human faces as well as the subtleties of color interactions. In this paper, we propose a denoising method based on PCA reconstruction for removing complex color noise on human faces, which is not easy to remove with vectorial color filters. The proposed method consists of five steps: training a canonical eigenface space using PCA, automatic extraction of facial features using an active appearance model, smoothing of the reconstructed color image using a bilateral filter, extraction of noise regions using the variance of the training data, and reconstruction using partial information from the input image (excluding the noise regions) followed by blending of the reconstructed image with the original image. Experimental results show that the proposed denoising method maintains the structural characteristics of the input faces while efficiently removing complex color noise.
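
A compact sketch of the PCA reconstruction step at the core of such a method: project a vectorized face onto a trained eigenface basis and reconstruct it, which suppresses noise that the face subspace cannot represent. The training matrix and component count are placeholders; the paper's iterative noise-region handling is only summarized in the closing comment.

```python
import numpy as np

def train_eigenfaces(faces, n_components=50):
    """faces: (num_samples, num_pixels) matrix of vectorized training faces."""
    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    # SVD of the centered data: rows of vt are the principal components (eigenfaces).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:n_components]

def pca_reconstruct(face_vec, mean_face, components):
    """Project a noisy face into the eigenface space and back."""
    coeffs = components @ (face_vec - mean_face)
    return mean_face + components.T @ coeffs

# In the paper's pipeline the reconstruction is repeated: at each pass only the
# detected noise regions are replaced with reconstructed values, while the
# original pixels elsewhere are kept and blended with the result.
```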

Study on 3D Avatar Face Conversion Method Based on 2D ID Photo (2D증명사진기반 3D아바타 페이스 변환방법에 대한 연구)

  • Kwon-Byong Lee
    • Journal of Digital Policy / v.3 no.4 / pp.15-20 / 2024
  • This research proposes a technique for real-time face replacement of a 3D avatar based on a 2D profile picture. By utilizing Blender, a 3D authoring tool, we map the facial mesh of an existing avatar to the feature points of a 2D image and perform rigging to implement natural changes according to facial expression variations. Afterward, the model is exported in FBX format and imported into the Unity game engine to establish a real-time rendering environment. Particularly, we apply a material matching technique to accurately represent the texture and color of the facial skin, resulting in a realistic avatar. This research suggests the possibility of generating avatars with a wider variety of facial features through integration with image generation models and is expected to contribute to the advancement of personalized avatar generation technology in metaverse environments.
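
A very rough Blender-Python sketch of the first two steps described above (mapping 2D feature points onto an existing avatar mesh, then exporting to FBX). The object name, vertex indices, offsets, and output path are hypothetical placeholders; the real pipeline uses full landmark sets, rigging, and Unity-side material matching.

```python
import bpy  # Blender's Python API; this sketch must run inside Blender

avatar = bpy.data.objects["AvatarHead"]          # hypothetical avatar mesh object

# Hypothetical mapping: mesh vertex index -> (dx, dz) offset derived from the
# 2D photo's feature points, already converted to the avatar's local scale.
feature_offsets = {12: (0.01, 0.03), 47: (-0.02, 0.05)}

for vert_index, (dx, dz) in feature_offsets.items():
    co = avatar.data.vertices[vert_index].co     # mutable vertex coordinate
    co.x += dx                                   # nudge the mesh toward the photo's features
    co.z += dz

# Export the adjusted avatar for import into the Unity rendering environment.
bpy.ops.export_scene.fbx(filepath="/tmp/avatar_face.fbx", use_selection=False)
```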

Building Control Box Attached Monitor based Color Grid Recognition Methods for User Access Authentication

  • Yoon, Sung Hoon;Lee, Kil Soo;Cha, Jae Sang;Khudaybergenov, Timur;Kim, Min Soo;Woo, Deok Gun;Kim, Jeong Uk
    • International Journal of Internet, Broadcasting and Communication / v.12 no.2 / pp.1-7 / 2020
  • Secure access to the lighting; heating, ventilation, and air conditioning (HVAC); fire safety; and security control boxes of building facilities is a primary objective of future smart buildings. This paper proposes authorized user access to the electrical, lighting, fire safety, and security control boxes in a smart building using color-grid-coded optical camera communication (OCC) combined with face recognition technology. The existing CCTV subsystem can serve as the face recognition security subsystem in the proposed approach, while the camera of a smart device can act as the OCC receiver for the color grid code carrying the user access authentication data sent by the control boxes. This approach increases the reliability of authorization control and provides highly secure authentication for access to building facility infrastructure. Checking the color grid code sequence received by a person against that person's face identification yields good security results and improves the effectiveness of access to building facility infrastructure. In the proposed concept, the control box monitor displays the encoded user access authentication information; a smart device application detects and decodes the color grid code combinations and sends them, together with the user's facial features, through the smart building network to the building management system for authentication verification, which provides a high level of protection. The proposed concept is implemented on a testbed model, and experimental results verify secure user authentication in real time.
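
A toy sketch of the color grid idea: pack an authentication token into a grid of colored cells that a control box monitor can display and a smartphone camera can decode. The cell size, four-color palette, and token format are assumptions for illustration only, not the paper's coding scheme.

```python
import numpy as np

# Four-color palette: each cell carries 2 bits (illustrative choice).
PALETTE = {0b00: (255, 0, 0), 0b01: (0, 255, 0),
           0b10: (0, 0, 255), 0b11: (255, 255, 0)}

def encode_color_grid(token: bytes, cols=8, cell=32):
    """Render a byte string as a color grid image (H x W x 3, uint8)."""
    bits = np.unpackbits(np.frombuffer(token, dtype=np.uint8))
    pairs = bits.reshape(-1, 2)                      # 2 bits per cell
    rows = int(np.ceil(len(pairs) / cols))
    grid = np.zeros((rows * cell, cols * cell, 3), dtype=np.uint8)
    for i, (b1, b0) in enumerate(pairs):
        r, c = divmod(i, cols)
        grid[r*cell:(r+1)*cell, c*cell:(c+1)*cell] = PALETTE[(b1 << 1) | b0]
    return grid

# The monitor would display encode_color_grid(session_token); the smart device
# application would decode the grid and send it, with the face match result,
# to the building management system for verification.
```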

Unconstrained e-Book Control Program by Detecting Facial Characteristic Point and Tracking in Real-time (얼굴의 특이점 검출 및 실시간 추적을 이용한 e-Book 제어)

  • Kim, Hyun-Woo;Park, Joo-Yong;Lee, Jeong-Jick;Yoon, Young-Ro
    • Journal of Biomedical Engineering Research / v.35 no.2 / pp.14-18 / 2014
  • This study concerns an e-Book program based on a human-computer interaction (HCI) system for physically handicapped persons. A vision-based interface can replace conventional computer input devices by extracting a characteristic point and tracking it. We chose the between-eyes point as the characteristic point by analyzing the facial image captured by a webcam. However, because of the three-dimensional structure of eyeglasses, the between-eyes point was not suitable for users wearing glasses, so we switched the characteristic point to the bridge of the nose after detecting the between-eyes point. With this technique, head rotation can be tracked in real time regardless of glasses. To evaluate the program's usefulness, we conducted an experiment with 20 subjects on the actual application; under proper conditions, the success rate for controlling the e-Book was 96.5%.
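
A small sketch of the detect-then-track pattern described above, using a stock face detector and Lucas-Kanade optical flow. Taking a fixed point inside the face box as a stand-in for the nose bridge is a simplification of the paper's between-eyes/nose-bridge detector, and the mapping to e-Book commands is left as a comment.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                       # webcam
ok, frame = cap.read()
gray_prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

faces = face_cascade.detectMultiScale(gray_prev, 1.3, 5)
x, y, w, h = faces[0]                           # assumes one face is found
# Crude stand-in for the nose-bridge point: slightly above the face-box center.
point = np.array([[[x + w / 2.0, y + 0.45 * h]]], dtype=np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Track the single point frame-to-frame with pyramidal Lucas-Kanade flow.
    point, status, _ = cv2.calcOpticalFlowPyrLK(gray_prev, gray, point, None)
    gray_prev = gray
    if status[0, 0] == 0:
        break                                   # lost the point; a real system would re-detect
    px, py = point[0, 0]
    # Horizontal/vertical displacement of (px, py) would be mapped to page-turn commands.

cap.release()
```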

Makeup transfer by applying a loss function based on facial segmentation combining edge with color information (에지와 컬러 정보를 결합한 안면 분할 기반의 손실 함수를 적용한 메이크업 변환)

  • Lim, So-hyun;Chun, Jun-chul
    • Journal of Internet Computing and Services / v.23 no.4 / pp.35-43 / 2022
  • Makeup is the most common way to improve a person's appearance. However, since makeup styles are very diverse, applying makeup oneself involves considerable time and cost, so the need for makeup automation is increasing. Makeup transfer, the task of applying a makeup style to a face image without makeup, is being studied for this purpose. Makeup transfer methods can be divided into traditional image-processing-based methods and deep-learning-based methods; among the latter, many studies are based on generative adversarial networks. Both kinds of methods, however, have drawbacks: the resulting image can be unnatural, the transferred makeup can be indistinct or smeared, or the result can be too heavily influenced by the makeup-style face image. To express clear makeup boundaries and to reduce the influence of the makeup-style face image, this study segments the makeup region and computes a loss function using HoG (Histogram of Oriented Gradients). HoG extracts image features from the magnitude and orientation of the edges in an image. On this basis, we propose a makeup transfer network that learns robustly on edges. Comparing images generated by the proposed model with those generated by BeautyGAN, used as the baseline model, confirms that the proposed model performs better; the use of additional facial information is suggested as future work.
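
A simplified sketch of an HoG-style edge term for such a loss: compute gradient-orientation histograms over the segmented face region of the generated and reference images and penalize their difference. The HoG parameters, the masking scheme, and the weighting are placeholders; this is not the paper's exact loss formulation.

```python
import numpy as np
from skimage.feature import hog

def hog_edge_loss(generated_gray, reference_gray, face_mask):
    """L2 distance between HoG descriptors of the masked generated and reference faces.

    generated_gray, reference_gray: 2-D float arrays in [0, 1].
    face_mask: boolean array marking the segmented facial region.
    """
    gen = generated_gray * face_mask          # zero out background so edges outside
    ref = reference_gray * face_mask          # the face region do not contribute
    feat_gen = hog(gen, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    feat_ref = hog(ref, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return float(np.mean((feat_gen - feat_ref) ** 2))

# In training, this term would be added with some weight to the usual GAN and
# makeup-transfer losses so that facial edges stay sharp after the transfer.
```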