• Title/Summary/Keyword: Face Transformation (얼굴 변형)


A Virtual Makeup Program Using Facial Feature Area Extraction Based on Active Shape Model and Modified Alpha Blending (ASM 기반의 얼굴 특징 영역 추출 및 변형된 알파 블렌딩을 이용한 가상 메이크업 프로그램)

  • Koo, Ja-Myoung; Cho, Tai-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering, v.14 no.8, pp.1827-1835, 2010
  • In this paper, facial feature areas in a user's picture are generated from facial feature points extracted by ASM (Active Shape Model). In existing virtual makeup applications, users must manually and precisely select a number of facial features, which is inconvenient. We propose a virtual makeup application using ASM that requires no user input. To produce natural-looking makeup, a modified alpha blending for each cosmetic is used to blend the skin color with the cosmetic color. The virtual makeup application was implemented to apply foundation, blush, lipstick, lip liner, eye pencil, eye liner, and eye shadow.
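The blending step can be sketched as below. This is a minimal illustration: the feathering function is one plausible "modification" of plain alpha blending, assumed here for the sketch; the paper's exact per-cosmetic modifications are not specified in the abstract.

```python
import numpy as np

def alpha_blend(skin, cosmetic, alpha):
    """Classic alpha blending of a cosmetic color over a skin color.

    `alpha` may be a scalar or a per-pixel mask with values in [0, 1].
    """
    skin = np.asarray(skin, dtype=float)
    cosmetic = np.asarray(cosmetic, dtype=float)
    return (1.0 - alpha) * skin + alpha * cosmetic

def feathered_alpha(dist_to_edge, feather=5.0):
    """One plausible modification: taper alpha near the region boundary
    so the cosmetic fades out smoothly instead of ending abruptly."""
    return np.clip(dist_to_edge / feather, 0.0, 1.0)
```

For example, blending skin `[100, 0, 0]` with cosmetic `[200, 100, 50]` at `alpha=0.5` yields `[150, 50, 25]`.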

Face Recognition using Fisherface Method with Fuzzy Membership Degree (퍼지 소속도를 갖는 Fisherface 방법을 이용한 얼굴인식)

  • 곽근창;고현주;전명근
    • Journal of KIISE: Software and Applications, v.31 no.6, pp.784-791, 2004
  • In this study, we deal with face recognition using a fuzzy-based Fisherface method. The well-known Fisherface method is less sensitive to large variations in lighting direction, face pose, and facial expression than the Principal Component Analysis (PCA) method. Most face recognition methods, including Fisherface, give every sample equal importance in determining the face to be recognized, regardless of its typicalness. The main point of the proposed method is that each feature vector transformed by PCA is assigned a fuzzy class membership rather than a hard assignment to a particular class. The fuzzy membership degrees are obtained from FKNN (Fuzzy K-Nearest Neighbor) initialization. Experimental results on the ORL and Yale face databases show better recognition performance than other methods.
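The FKNN membership initialization can be sketched as below. The 0.51/0.49 constants follow the common Keller-style formulation and are an assumption of this sketch, not taken from the paper.

```python
import numpy as np

def fknn_init_memberships(X, y, k=3, n_classes=None):
    """Fuzzy membership initialization in the Keller FKNN style, as commonly
    used before a fuzzy Fisherface projection. Membership to a sample's own
    class gets a 0.51 boost; the remaining 0.49 is shared in proportion to
    the class mix among its k nearest neighbours."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    if n_classes is None:
        n_classes = int(y.max()) + 1
    n = len(X)
    U = np.zeros((n, n_classes))
    for j in range(n):
        d = np.linalg.norm(X - X[j], axis=1)
        d[j] = np.inf                       # exclude the sample itself
        nbrs = np.argsort(d)[:k]
        counts = np.bincount(y[nbrs], minlength=n_classes)
        U[j] = 0.49 * counts / k            # neighbour-proportional share
        U[j, y[j]] += 0.51                  # own-class boost
    return U
```

Each row of `U` sums to 1, so the memberships behave like a fuzzy class distribution for the subsequent scatter-matrix computation.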

Vector-based Face Generation using Montage and Shading Method (몽타주 기법과 음영합성 기법을 이용한 벡터기반 얼굴 생성)

  • 박연출;오해석
    • Journal of KIISE: Software and Applications, v.31 no.6, pp.817-828, 2004
  • In this paper, we propose a vector-based face generation system that uses montage and shading methods and preserves the designer's (artist's) style. The proposed system automatically generates a character face similar to the human face, using facial features extracted from a photograph. In addition, unlike previous face generation systems based on contours, the proposed system is based on color and composes the face from facial features and shading extracted from a photograph. It therefore has the advantage of producing a more realistic face that closely resembles the human face. Since the system is vector-based, the generated character face has no size limits or constraints, so its shape can be transformed freely and various facial expressions can be applied to the 2D face. Moreover, it differs from other approaches in that the artist's style is preserved in the result.

Synthesizing Faces of Animation Characters Using a 3D Model (3차원 모델을 사용한 애니메이션 캐릭터 얼굴의 합성)

  • Jang, Seok-Woo; Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information, v.17 no.8, pp.31-40, 2012
  • In this paper, we propose a method of synthesizing the faces of a user and an animation character using a 3D face model. The suggested method first receives two orthogonal 2D face images and extracts the major facial features with a template snake. It then generates a user-customized 3D face model by adjusting a generalized face model to the extracted facial features and mapping texture maps obtained from the two input images onto the 3D face model. Finally, it generates a user-customized animation character by synthesizing the generated 3D model with an animation character, reflecting the character's position, size, facial expression, and rotation. Experimental results verify the performance of the suggested algorithm. We expect our method to be useful in applications such as games and animated movies.
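The model-adjustment step can be illustrated with a simple 2D sketch. The paper's actual deformation scheme is not given in the abstract, so this uses hypothetical inverse-distance weighting of landmark offsets; the function and its parameters are illustrative only.

```python
def adjust_vertices(vertices, landmarks, targets):
    """Move generic-model vertices so that landmark points slide onto the
    user's extracted feature positions; every other vertex follows by an
    inverse-distance-weighted average of the landmark offsets.
    (Illustrative only; the paper's exact adjustment rule may differ.)"""
    offsets = [(tx - lx, ty - ly) for (lx, ly), (tx, ty) in zip(landmarks, targets)]
    adjusted = []
    for vx, vy in vertices:
        # Closer landmarks get larger weights; epsilon avoids division by zero.
        ws = [1.0 / ((vx - lx) ** 2 + (vy - ly) ** 2 + 1e-9) for lx, ly in landmarks]
        wsum = sum(ws)
        dx = sum(w * ox for w, (ox, _) in zip(ws, offsets)) / wsum
        dy = sum(w * oy for w, (_, oy) in zip(ws, offsets)) / wsum
        adjusted.append((vx + dx, vy + dy))
    return adjusted
```

With a single landmark, every vertex simply shifts by that landmark's offset; with several, the deformation interpolates smoothly between them.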

Face Tracking Using Face Feature and Color Information (색상과 얼굴 특징 정보를 이용한 얼굴 추적)

  • Lee, Kyong-Ho
    • Journal of the Korea Society of Computer and Information, v.18 no.11, pp.167-174, 2013
  • In this paper, we implement the ability to find and track faces in color images. Face tracking, the task of locating face regions in an image, is a necessary function for robots. However, face tracking cannot be performed by skin-color extraction alone, because the face in an image varies with conditions such as lighting and facial expression. In this paper, we add a lighting compensation function to skin-color pixel extraction and implement a complete processing system that confirms a region as a face by finding the eye, nose, and mouth features. The lighting compensation function is an adjusted sine function; although its output is not well suited to human vision, it improved performance by about 4%. Facial features are detected by amplifying and reducing pixel values and comparing the resulting images; the eye and nose positions and the lips are detected. Face tracking efficiency was good.
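A sine-based luminance remap of the kind mentioned above can be sketched as follows. The exact curve used in the paper is not given, so this particular mapping (fixing black and white, brightening mid-tones) is an assumption for illustration.

```python
import math

def sine_lighting_compensation(y):
    """Hypothetical sine-based luminance compensation: maps an 8-bit
    luminance value through the first quarter of a sine wave, which keeps
    0 and 255 fixed while lifting mid-tones (helpful before skin-color
    thresholding under dim lighting)."""
    t = y / 255.0
    return round(255.0 * math.sin(t * math.pi / 2.0))
```

Such a curve deliberately distorts tonality, which matches the abstract's remark that the compensated image is not suited to human vision even though it helps the detector.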

Recognition of Facial Expressions Using Muscle-based Feature Models (근육기반의 특징모델을 이용한 얼굴표정인식에 관한 연구)

  • 김동수;남기환;한준희;박호식;차영석;최현수;배철수;권오홍;나상동
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 1999.11a, pp.416-419, 1999
  • We present a technique for recognizing facial expressions from image sequences. The technique uses muscle-based feature models for tracking facial features. Since the feature models are constructed with a small number of parameters and are deformable only within a limited range and set of directions, the search space for each feature can be limited. The technique estimates muscular contraction degrees to classify the six principal facial expressions. The contraction vectors are obtained from the deformations of the facial muscle models. Similarities between these vectors and representative vectors of the principal expressions are defined and used to determine the facial expression.
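The classification step, comparing an observed contraction vector with representative vectors of the principal expressions, can be sketched as below. Cosine similarity, the expression names, and the prototype values are assumptions for illustration; the paper defines its own similarity measure.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two muscle-contraction vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

def classify_expression(contractile, prototypes):
    """Return the expression whose representative contraction vector is most
    similar to the observed one. `prototypes` maps expression names to
    representative vectors (illustrative values, not from the paper)."""
    return max(prototypes, key=lambda name: cosine_similarity(contractile, prototypes[name]))
```

In practice one prototype per principal expression would be learned from labeled sequences, and the observed vector comes from the tracked muscle-model deformations.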


Feature Extraction of Face Region using YUV Transform (YUV 변환을 이용한 안면 영역의 특징 추출)

  • Chae, Duck-Jae; Choi, Young-Kyoo; Rhee, Sang-Burm
    • Proceedings of the Korea Information Processing Society Conference, 2002.11a, pp.641-644, 2002
  • Facial feature point extraction is an active research area with various applications such as security and recognition. In this paper, we present a new method that finds facial feature points accurately and with fast computation from PC-camera images and scanned photographs on resident registration (ID) cards. The RGB color space is converted to YUV, and the Y component is histogram-equalized so that the facial skin color can be extracted regardless of luminance; the facial feature points are then found using a V' component derived from the V component of YUV. Experimental results show that faces were extracted without error from both ID-card photographs and images captured by a PC camera.
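The RGB-to-YUV conversion and Y-channel equalization can be sketched as below, assuming the standard BT.601 analog-YUV coefficients (the paper does not state which variant it uses).

```python
def rgb_to_yuv(r, g, b):
    """ITU-R BT.601 analog RGB -> YUV conversion."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

def equalize_histogram(y_values, levels=256):
    """Histogram-equalize a sequence of integer luminance values so that
    skin-color extraction is less sensitive to overall brightness."""
    n = len(y_values)
    hist = [0] * levels
    for y in y_values:
        hist[y] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    denom = n - cdf_min
    if denom == 0:                 # flat image: nothing to equalize
        return list(y_values)
    return [round((cdf[y] - cdf_min) / denom * (levels - 1)) for y in y_values]
```

After equalization, a fixed skin-color threshold on the chroma components applies more consistently across bright and dim input images.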


Face Feature Extraction for Face Recognition (얼굴 인식을 위한 얼굴 특징점 추출)

  • Yang, Ryong; Chae, Duk-Jae; Lee, Sang-Bum
    • Journal of the Korea Computer Industry Society, v.3 no.12, pp.1765-1774, 2002
  • Face recognition is currently a field in which much research is being conducted, but many problems remain to be solved. First, we must recognize the face of a subject under various changes in lighting and camera position. In this paper, we propose a new method that finds facial features accurately and with fast computation from PC-camera images and scanned ID-card pictures. The RGB color space is converted to YUV, and the facial skin color is extracted by equalizing the histogram of the Y component independently of luminance. The method then finds the facial features using a V' component derived from the V component of YUV. Experimental results show that correct face images were obtained from both ID-card pictures and a PC camera.


Designing and Implementing 3D Virtual Face Aesthetic Surgery System (3D 가상 얼굴 성형 제작 시스템 설계 및 구현)

  • Lee, Cheol-Woong; Kim, Il-Min; Cho, Sae-Hong
    • Journal of Digital Contents Society, v.9 no.4, pp.751-758, 2008
  • The purpose of this study is to implement a 3D face model that resembles the user, using 3D graphics techniques. The implemented 3D face model is used to further develop a 3D facial aesthetic surgery system, which can increase patient satisfaction by comparing the face before and after aesthetic surgery. In designing and implementing the system, 3D modeling, texture mapping for skin, and a database system for facial data were studied and implemented independently. A detailed adjustment system was also implemented to reflect the minute features of the face. The implemented 3D facial aesthetic surgery system shows better accuracy, convenience, and satisfaction than existing systems.


A Face Detection using Pupil-Template from Color Base Image (컬러 기반 영상에서 눈동자 템플릿을 이용한 얼굴영상 추출)

  • Choi, Ji-Young; Kim, Mi-Kyung; Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, v.9 no.1, pp.828-831, 2005
  • In this paper, we propose a method to detect human faces in color images using pupil-template matching. Face detection proceeds in three stages: (i) separating skin regions from non-skin regions; (ii) generating face-region candidates by fitting a best-fit ellipse; and (iii) detecting the face by pupil-template matching. Skin-region detection is based on a skin color model: a gray-scale image is generated from the original image using the skin model and segmented to separate skin regions from non-skin regions. The face region is generated by fitting a best-fit ellipse computed from moments. The generated face regions are then matched against the pupil template to detect the face.
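The moment-based best-fit ellipse of stage (ii) can be sketched as below, using the standard central-moment formulas for a point region; the pupil-template matching stage is omitted.

```python
import math

def ellipse_from_moments(points):
    """Best-fit ellipse of a binary region given as (x, y) pixel coordinates:
    centroid from first moments, orientation from second central moments,
    axis lengths proportional to the square roots of the covariance
    eigenvalues. This is the standard construction for turning a skin-color
    blob into an elliptical face-region candidate."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    mu20 = sum((x - cx) ** 2 for x, _ in points) / n
    mu02 = sum((y - cy) ** 2 for _, y in points) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in points) / n
    theta = 0.5 * math.atan2(2 * mu11, mu20 - mu02)    # major-axis angle
    common = math.sqrt(4 * mu11 ** 2 + (mu20 - mu02) ** 2)
    major = math.sqrt(max(2 * (mu20 + mu02 + common), 0.0))
    minor = math.sqrt(max(2 * (mu20 + mu02 - common), 0.0))
    return (cx, cy), theta, major, minor
```

A candidate whose ellipse has a roughly face-like aspect ratio would then be passed to the pupil-template stage.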
