• Title/Summary/Keyword: 3D Skin Color Model


Integrated 3D Skin Color Model for Robust Skin Color Detection of Various Races (강건한 다인종 얼굴 검출을 위한 통합 3D 피부색 모델)

  • Park, Gyeong-Mi;Kim, Young-Bong
    • The Journal of the Korea Contents Association / v.9 no.5 / pp.1-12 / 2009
  • The correct detection of skin color is an important preliminary step in face detection and human motion analysis. It is generally performed in three steps: transforming the pixel color to a non-RGB color space, dropping the illuminance component of the skin color, and classifying the pixels with a skin color distribution model. Skin detection depends on various factors such as the color space, the presence of illumination, and the skin modeling method. In this paper we propose a 3D skin color model that can segment pixels of several ethnic skin colors from images with various illumination conditions and complicated backgrounds. The proposed skin color model is built from the (Y, Cb, Cr) components obtained by transforming pixel colors into the YCbCr color space. In order to segment the skin colors of several ethnic groups together, we first create a skin color model for each ethnic group and then merge the models using their skin color probabilities. Further, the proposed model stratifies skin color areas into several levels, which helps classify proper skin color areas even with small training data.
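
As a rough illustration of the pipeline this abstract describes, the sketch below builds per-group 3D histograms in YCbCr, merges them by probability, and thresholds pixels against the merged model. The bin size, example inputs, and threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a merged 3D skin-color model in YCbCr, loosely following
# the pipeline the abstract describes. Bin size and threshold are assumed.
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an (N, 3) float array of RGB values in [0, 255] to YCbCr."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=1)

def skin_histogram(pixels_rgb, bins=32):
    """Build a normalized 3D (Y, Cb, Cr) histogram from skin-pixel samples."""
    ycbcr = rgb_to_ycbcr(pixels_rgb)
    hist, _ = np.histogramdd(ycbcr, bins=bins, range=[(0, 256)] * 3)
    return hist / max(hist.sum(), 1)

def merge_models(histograms):
    """Merge per-ethnic-group histograms by averaging their probabilities."""
    return np.mean(histograms, axis=0)

def classify(pixels_rgb, model, bins=32, threshold=1e-4):
    """Label a pixel as skin when its (Y, Cb, Cr) bin probability is high enough."""
    ycbcr = rgb_to_ycbcr(pixels_rgb)
    idx = np.clip((ycbcr / 256.0 * bins).astype(int), 0, bins - 1)
    return model[idx[:, 0], idx[:, 1], idx[:, 2]] > threshold
```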

Skin Color Region Segmentation using a Layered 3D Skin Color Model (계층화된 3차원 피부색 모델을 이용한 피부색 분할)

  • Park, Gyeong-Mi;Yoon, Ga-Rim;Kim, Young-Bong
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.8 / pp.1809-1818 / 2010
  • In order to detect skin color areas in input images, many prior studies have divided an image into pixels that have a skin color and all other pixels. In still images and videos it is very difficult to extract the skin pixels exactly, because lighting conditions and makeup produce wide variations of skin color. In this paper, we propose a method that improves segmentation performance for such difficult images using a hierarchical 3D skin color model and context information. We first build a 3D color histogram distribution from the skin color pixels of many YCbCr color images and then divide the color space into three layers: a skin color region (Skin), a non-skin color region (Non-skin), and a skin color candidate region (Skinness). When segmenting the skin color region of an image, skin color pixels and non-skin color pixels are assigned to the skin region and the non-skin region, respectively. If a pixel belongs to the Skinness region, it is assigned to the skin or non-skin region according to the context information of its neighbors. The proposed method efficiently segments skin color regions in images containing heavily distorted skin colors and colors similar to skin.
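
The three-layer decision above lends itself to a short sketch: two probability thresholds split pixels into Skin, Non-skin, and Skinness, and Skinness pixels are resolved from their neighbors. The thresholds and the majority-vote rule are assumptions standing in for the paper's context-information step.

```python
# Illustrative three-layer labeling: high-probability pixels are Skin, low
# are Non-skin, and the Skinness band in between is resolved by the labels
# of the 8 neighboring pixels. Both thresholds are assumed values.
import numpy as np

SKIN, SKINNESS, NON_SKIN = 2, 1, 0

def layer_labels(prob_map, hi=0.6, lo=0.2):
    """Split a per-pixel skin-probability map into three layers."""
    labels = np.full(prob_map.shape, SKINNESS, dtype=np.uint8)
    labels[prob_map >= hi] = SKIN
    labels[prob_map <= lo] = NON_SKIN
    return labels

def resolve_skinness(labels):
    """Assign each Skinness pixel by majority vote of its 8 neighbors."""
    skin = (labels == SKIN).astype(int)
    votes = np.zeros_like(skin)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy or dx:
                votes += np.roll(np.roll(skin, dy, axis=0), dx, axis=1)
    out = labels.copy()
    undecided = labels == SKINNESS
    out[undecided] = np.where(votes[undecided] >= 4, SKIN, NON_SKIN)
    return out
```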

A Realtime Expression Control for Realistic 3D Facial Animation (현실감 있는 3차원 얼굴 애니메이션을 위한 실시간 표정 제어)

  • Kim Jung-Gi;Min Kyong-Pil;Chun Jun-Chul;Choi Yong-Gil
    • Journal of Internet Computing and Services / v.7 no.2 / pp.23-35 / 2006
  • This work presents a novel method that automatically extracts the facial region and facial features from motion pictures and controls the 3D facial expression in real time. To extract the facial region and facial feature points from each color frame of a motion picture, a new nonparametric skin color model is proposed instead of a parametric one. Conventional parametric skin color models, which represent the facial color distribution as Gaussian, lack robustness under varying lighting conditions, so additional work is needed to extract the exact facial region from face images. To resolve this limitation, we exploit the Hue-Tint chrominance components and represent the skin chrominance distribution as a linear function, which reduces the error in detecting the facial region. Moreover, the minimal facial feature positions detected by the proposed skin model are adjusted using edge information of the detected facial region along with the proportions of the face. To produce realistic facial expressions, we adopt Waters' linear muscle model and apply an extended version of Waters' muscles to the variation of the facial features of the 3D face. The experiments show that the proposed approach efficiently detects facial feature points and naturally controls the facial expression of the 3D face model.
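
A minimal sketch of the linear chrominance idea follows: skin samples are fit with a line in a 2D (hue, tint) plane, and pixels are classified by their distance to that line. The tint formula and the tolerance are assumptions; the paper's exact Hue-Tint definition is not reproduced here.

```python
# Hedged sketch: fit skin samples with a line in (hue, tint) space and
# classify pixels by perpendicular distance to the line.
import numpy as np
import colorsys

def hue_tint(rgb):
    """Map an (R, G, B) tuple in [0, 255] to a rough (hue, tint) pair."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    tint = v * (1.0 - s)          # assumed proxy for 'tint' (whiteness)
    return h, tint

def fit_line(samples):
    """Least-squares fit tint = a * hue + b over skin-color samples."""
    pts = np.array([hue_tint(s) for s in samples])
    a, b = np.polyfit(pts[:, 0], pts[:, 1], 1)
    return a, b

def is_skin(rgb, line, tol=0.05):
    """Classify a pixel as skin if it lies close to the fitted line."""
    a, b = line
    h, t = hue_tint(rgb)
    return abs(t - (a * h + b)) / np.sqrt(a * a + 1.0) <= tol
```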


Spectrum-Based Color Reproduction Algorithm for Makeup Simulation of 3D Facial Avatar

  • Jang, In-Su;Kim, Jae Woo;You, Ju-Yeon;Kim, Jin Seo
    • ETRI Journal / v.35 no.6 / pp.969-979 / 2013
  • Various simulation applications for hair, clothing, and makeup of a 3D avatar can provide more useful information to users before they select a hairstyle, clothes, or cosmetics. To enhance their reality, the shapes, textures, and colors of the avatars should be similar to those found in the real world. For more realistic 3D avatar color reproduction, this paper proposes a spectrum-based color reproduction algorithm and a color management process for implementing it. First, a makeup color reproduction model is estimated by analyzing the measured spectral reflectance of skin samples before and after applying makeup. To implement the model in a makeup simulation system, the color management process controls all color information of the 3D facial avatar during the 3D scanning, modeling, and rendering stages. During 3D scanning with a multi-camera system, spectrum-based camera calibration and characterization are performed to estimate the spectrum data. During the virtual makeup process, the spectrum data of the 3D facial avatar is modified based on the makeup color reproduction model. Finally, during 3D rendering, the estimated spectrum is converted into RGB data through gamut mapping and display characterization.
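
The final rendering stage, converting an estimated spectrum to RGB, can be sketched as integrating reflectance against color-matching functions and mapping XYZ to linear sRGB. Crude Gaussian stand-ins for the CIE 1931 curves and a flat illuminant are assumed below, so the numbers are illustrative only.

```python
# Sketch of spectrum-to-RGB conversion. Gaussian approximations stand in for
# the CIE 1931 color-matching functions; a flat illuminant is assumed.
import numpy as np

wl = np.arange(380.0, 781.0, 5.0)  # wavelengths in nm

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Rough stand-ins for the x-bar, y-bar, z-bar color-matching functions.
cmf_x = 1.06 * gauss(wl, 599, 38) + 0.36 * gauss(wl, 446, 19)
cmf_y = 1.01 * gauss(wl, 557, 47)
cmf_z = 1.78 * gauss(wl, 449, 23)

def spectrum_to_rgb(reflectance):
    """Integrate reflectance * illuminant against the CMFs, then map XYZ to linear sRGB."""
    illuminant = np.ones_like(wl)            # flat (equal-energy) illuminant
    stim = reflectance * illuminant
    X, Y, Z = (np.trapz(stim * c, wl) for c in (cmf_x, cmf_y, cmf_z))
    n = np.trapz(illuminant * cmf_y, wl)     # normalize so Y of white = 1
    xyz = np.array([X, Y, Z]) / n
    m = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])  # XYZ -> linear sRGB matrix
    return np.clip(m @ xyz, 0.0, 1.0)
```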

Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model (다양한 형식의 얼굴정보와 준원근 카메라 모델해석을 이용한 얼굴 특징점 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions: Part B / v.9B no.5 / pp.563-570 / 2002
  • Robust extraction of 3D facial features and global motion information from a 2D image sequence for MPEG-4 SNHC face model encoding is described. The facial regions are detected in the image sequence using a multi-modal fusion technique that combines range, color, and motion information. Twenty-three facial features among the MPEG-4 FDP (Face Definition Parameters) are extracted automatically inside the facial region using color transforms (GSCD, BWCD) and morphological processing. The extracted facial features are used to recover the 3D shape and global motion of the object using a paraperspective camera model and the SVD (Singular Value Decomposition) factorization method. A 3D synthetic object is designed and tested to show the performance of the proposed algorithm. The recovered 3D motion information is transformed into the global motion parameters of the FAP (Face Animation Parameters) of MPEG-4 so that a generic face model can be synchronized with a real face.
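
At the core of this recovery is a rank-3 factorization of the tracked feature matrix. The sketch below shows that Tomasi-Kanade-style decomposition under an orthographic assumption; the paper's paraperspective camera model adds further constraints that are omitted here.

```python
# Rank-3 factorization of a feature-track matrix into motion and shape.
import numpy as np

def factorize(tracks):
    """tracks: (2F, P) matrix of P feature points tracked over F frames
    (x rows then y rows per frame). Returns (motion, shape)."""
    w = tracks - tracks.mean(axis=1, keepdims=True)  # register to centroid
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    rank = 3                                         # rigid scene => rank 3
    motion = u[:, :rank] * np.sqrt(s[:rank])         # (2F, 3) camera matrix
    shape = np.sqrt(s[:rank])[:, None] * vt[:rank]   # (3, P) 3D points
    return motion, shape
```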

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung;Kim, Young-Bong
    • The Journal of the Korea Contents Association / v.9 no.7 / pp.159-170 / 2009
  • It is very important to extract expression data and capture face images from video for online 3D face animation. Recently there have been many studies on vision-based approaches that capture the expression of an actor in a video and apply it to a 3D face model. In this paper, we propose an automatic data extraction system that extracts and tracks a face and its expression data from real-time video input. Our system proceeds in three steps: face detection, facial feature extraction, and feature tracking. For face detection, we detect skin pixels with a YCbCr skin color model and verify the face area with a Haar-based classifier. We use brightness and color information to extract the eye and lip data related to facial expression, extracting 10 feature points from the eye and lip areas in accordance with the FAPs defined in MPEG-4. Then we track the displacement of the extracted features across consecutive frames using a color probability distribution model. The experiments showed that our system can track expression data at about 8 fps.
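
The detection step, a YCbCr skin mask combined with Haar-cascade verification, might look roughly like the following with OpenCV. The Cb/Cr bounds and the skin-ratio cutoff are common textbook values assumed here, not the paper's.

```python
# Sketch of skin-mask-plus-Haar face detection for one video frame.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def skin_ratio(mask, rect):
    """Fraction of a rectangle covered by the skin mask."""
    x, y, w, h = rect
    roi = mask[y:y + h, x:x + w]
    return float(np.count_nonzero(roi)) / max(roi.size, 1)

def detect_face(bgr_frame):
    """Return (verified_face_rects, skin_mask) for one frame."""
    ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
    # Note OpenCV channel order is (Y, Cr, Cb); bounds are textbook values.
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Keep only detections whose area is mostly skin-colored.
    verified = [r for r in faces if skin_ratio(skin_mask, r) > 0.4]
    return verified, skin_mask
```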

Research of Quantitative Modeling that Classify Personal Color Skin Tone (퍼스널 컬러 스킨 톤 유형 분류의 정량적 평가 모델 구축에 대한 연구)

  • Kim, Yong Hyeon;Oh, Yu Seok;Lee, Jung Hoon
    • Journal of the Korean Society of Clothing and Textiles / v.42 no.1 / pp.121-132 / 2018
  • Recent beauty trends focus on suitability to individual features. The personal color system is a recent aesthetic concept that influences color makeup and coordination. However, the concept has several weaknesses: for example, type classification is qualitative rather than quantitative, because its measuring system is a sensory test and there is no industry standard for personal color systems. The purpose of this study is a quantitative personal color type classification model that addresses these problems. The model is a mapping system in a 3D Cartesian coordinate space whose axes are Value, Saturation, and Yellowness: the cheek color of an individual sample is the independent variable and the personal color type is the dependent variable. To construct the model, this study conducted a colorimetric survey of 993 Korean women in their 20s and 30s. The significance of this study is as follows: it establishes the personal color system in a quantitative color space, and the model is flexible and scalable because it consists of independent axes, so any other critical variable can be included as an additional axis.
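
The mapping idea can be sketched as projecting a measured cheek color onto (Value, Saturation, Yellowness) axes and picking the nearest type centroid. The axis formulas and the example centroids below are purely hypothetical; the paper fits its own model from the survey data.

```python
# Hedged sketch: cheek color -> VSY coordinates -> nearest-centroid type.
import numpy as np
import colorsys

def vsy(rgb):
    """Map an (R, G, B) tuple in [0, 255] to (value, saturation, yellowness)."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    yellowness = (r + g) / 2.0 - b        # crude proxy for a yellow-blue axis
    return np.array([v, s, yellowness])

# Hypothetical type centroids in VSY space (not from the paper).
CENTROIDS = {
    "spring": np.array([0.85, 0.40, 0.25]),
    "summer": np.array([0.80, 0.25, 0.05]),
    "autumn": np.array([0.65, 0.45, 0.30]),
    "winter": np.array([0.70, 0.30, 0.00]),
}

def classify_tone(cheek_rgb):
    """Return the personal-color type whose centroid is nearest in VSY space."""
    p = vsy(cheek_rgb)
    return min(CENTROIDS, key=lambda k: np.linalg.norm(p - CENTROIDS[k]))
```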

Facial Color Control based on Emotion-Color Theory (정서-색채 이론에 기반한 게임 캐릭터의 동적 얼굴 색 제어)

  • Park, Kyu-Ho;Kim, Tae-Yong
    • Journal of Korea Multimedia Society / v.12 no.8 / pp.1128-1141 / 2009
  • Graphical expression is continuously improving, spurred by the astonishing growth of the game technology industry. Despite such improvements, users still demand a more natural gaming environment and truer reflections of human emotions. In real life, people can read a person's mood from facial color and expression; hence, interactive facial colors in game characters provide a deeper level of reality. In this paper we propose a facial color adaptive technique that combines an emotional model based on human emotion theory, emotional expression patterns using the colors of animation contents, and an emotional reaction speed function based on human personality theory, as opposed to past methods that expressed emotion through blood flow, pulse, or skin temperature. Experiments show that expressing facial color with the proposed adaptive technique together with the expression patterns of animation contents is effective in conveying character emotions. Moreover, the proposed facial color adaptive technique can be applied not only to 2D games but to 3D games as well.
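
One way to picture the reaction-speed idea: the face color eases toward an emotion's target color at a rate scaled by a temperament factor. The target palette and the exponential easing below are illustrative assumptions, not the paper's model.

```python
# Illustrative emotion-driven face-color easing with a temperament factor.
import numpy as np

# Hypothetical emotion -> target RGB mapping (not from the paper).
EMOTION_COLOR = {
    "anger":   np.array([200.0, 60.0, 60.0]),
    "fear":    np.array([160.0, 160.0, 190.0]),
    "joy":     np.array([230.0, 180.0, 160.0]),
    "neutral": np.array([210.0, 170.0, 150.0]),
}

def step_face_color(current, emotion, temperament=1.0, dt=1.0 / 30.0):
    """Ease the face color toward the emotion's target; higher temperament
    values model personalities that react faster."""
    target = EMOTION_COLOR[emotion]
    rate = 1.0 - np.exp(-temperament * dt)   # fraction covered this frame
    return current + rate * (target - current)

# Usage: call once per rendered frame.
color = EMOTION_COLOR["neutral"].copy()
for _ in range(90):                          # three seconds at 30 fps
    color = step_face_color(color, "anger", temperament=2.0)
```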


Comparative modeling of human tyrosinase - An important target for developing skin whitening agents (사람 티로시나제의 3차원 구조 상동 모델링)

  • Choi, Jong-Keun;Suh, Joo-Won
    • Proceedings of the KAIS Fall Conference / 2012.05a / pp.182-186 / 2012
  • Human tyrosinase (hTyr) catalyzes the first and rate-limiting step in the synthesis of the polymerized pigment melanin, which determines skin, hair, and eye colors. Mutation of hTyr often decreases melanin production and can lead to albinism. Meanwhile, a number of cosmetic companies providing skincare products for women in the Asia-Pacific region have tried for decades to develop inhibitors that brighten skin color. In this study, we built a 3D structure of hTyr by comparative modeling, using the crystal structure of tyrosinase from Bacillus megaterium as a template, to provide structural information on hTyr. According to our model and a sequence analysis of type 3 copper family proteins, the two copper atoms of the active site, located deep inside the protein, are coordinated by six strictly conserved histidine residues from a four-helix bundle. The cavity that accommodates substrates is funnel-shaped, with a wide entrance exposed to solvent. In addition, protein-substrate and protein-inhibitor complexes were modeled with the guidance of a van der Waals surface generated by in-house software. Our model suggests that only a phenol group or its analogs can fill the binding site near the binuclear copper center, because the interior of the binding site is relatively narrow. In conclusion, the results of this study may provide helpful information for designing and screening new anti-melanogenesis agents.


Research on Methods to Increase Recognition Rate of Korean Sign Language using Deep Learning

  • So-Young Kwon;Yong-Hwan Lee
    • Journal of Platform Technology / v.12 no.1 / pp.3-11 / 2024
  • Deaf people who use sign language as their first language sometimes have difficulty communicating because they do not know spoken Korean. Deaf people are also members of society, so we must help create a society where everyone can live together. In this paper, we present a method to increase the recognition rate of Korean sign language using a CNN model. When the original image was used as input to the CNN model, the accuracy was 0.96; when the image restricted to the skin area in the YCbCr color space was used as input, the accuracy was 0.72. This confirms that using the original image itself leads to better results. In other studies, the accuracy of a combined Conv1d and LSTM model was 0.92, and the accuracy of an AlexNet model was 0.92. The CNN model proposed in this paper reaches 0.96 and is thus shown to be helpful in recognizing Korean sign language.
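
For context, a small CNN image classifier of the kind the abstract benchmarks might be sketched as follows; the layer sizes, the 64x64 input resolution, and the class count are assumptions, since the paper's exact architecture is not given here.

```python
# Minimal CNN classifier sketch in PyTorch; architecture details are assumed.
import torch
import torch.nn as nn

class SignCNN(nn.Module):
    def __init__(self, num_classes=30):       # assumed number of sign classes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):                      # x: (N, 3, 64, 64) RGB batch
        return self.classifier(self.features(x))

model = SignCNN()
logits = model(torch.randn(1, 3, 64, 64))      # smoke test on a random input
```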
