• Title/Summary/Keyword: 3D 얼굴모델 (3D face model)


A Study on the Feature Point Extraction and Image Synthesis in the 3-D Model Based Image Transmission System (3차원 모델 기반 영상전송 시스템에서의 특징점 추출과 영상합성 연구)

  • 배문관;김동호;정성환;김남철;배건성
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.17 no.7
    • /
    • pp.767-778
    • /
    • 1992
  • A method to extract feature points and to synthesize human facial images in a 3-D model-based coding system is discussed. Facial feature points are extracted automatically using image processing techniques and prior knowledge of the human face. A wire-frame model matched to the human face is transformed according to the motion of the extracted feature points. The synthesized image is produced by mapping the texture of the initial front-view image onto the transformed wire frame. Experimental results show that the synthesized image appears with little unnaturalness.

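The model deformation step described above (moving the wire frame according to the motion of the extracted feature points) can be sketched as distance-weighted interpolation of feature displacements. The Gaussian weighting and all names here are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def deform_wireframe(vertices, feat_pts, feat_disp, sigma=1.0):
    """Move wire-frame vertices by distance-weighted interpolation of the
    extracted feature points' displacements (a simple stand-in for the
    paper's model deformation; the Gaussian weighting is an assumption)."""
    out = vertices.copy()
    for i, v in enumerate(vertices):
        d2 = ((feat_pts - v) ** 2).sum(axis=1)   # squared distance to features
        w = np.exp(-d2 / sigma ** 2)
        w /= w.sum()                              # normalized influence weights
        out[i] = v + w @ feat_disp
    return out

# one feature point moves up by 1; a vertex at that point follows it exactly
verts = np.array([[0.0, 0.0], [10.0, 0.0]])
feats = np.array([[0.0, 0.0]])
disp = np.array([[0.0, 1.0]])
moved = deform_wireframe(verts, feats, disp)
print(moved[0])   # → [0. 1.]
```

The deformed vertices would then drive the texture mapping of the initial front-view image onto the new wire-frame pose.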

A Study on Face Detection Parameter Measurement Technology through 3D Image Object Recognition (3D영상 객체인식을 통한 얼굴검출 파라미터 측정기술에 대한 연구)

  • Choi, Byung-Kwan;Moon, Nam-Mee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.10
    • /
    • pp.53-62
    • /
    • 2011
  • With the development of high-tech IT convergence and of personal portable terminals such as smartphones, video object recognition has become an increasingly important technology. Face detection based on 3D recognition, which identifies objects through intelligent video recognition, has been evolving rapidly on top of general image recognition technology. This paper applies image-processing-based object recognition to human face detection: face recognition is applied to IP camera input to identify facial features such as the eyes and mouth, and measurement techniques for the recognized face are studied. The study proceeds as follows: 1) a face-model-based face tracking technique is developed and applied; 2) with a PC-based implementation of the algorithm, basic facial parameters can be tracked while the CPU load is measured; and 3) the interocular distance and gaze angle can be tracked in real time, which proved effective.

Automatic Mask Generation for 3D Makeup Simulation (3차원 메이크업 시뮬레이션을 위한 자동화된 마스크 생성)

  • Kim, Hyeon-Joong;Kim, Jeong-Sik;Choi, Soo-Mi
    • Proceedings of HCI Korea (한국HCI학회 학술대회논문집)
    • /
    • 2008.02a
    • /
    • pp.397-402
    • /
    • 2008
  • In this paper, we develop an automated mask generation method for applying precise painting to makeup targets in a haptic-interaction-based 3D virtual face makeup simulation. The mask is generated in a preprocessing step before the makeup simulation. First, the user's face texture image and a 3D geometric surface model are acquired from a 3D scanner. Image processing algorithms such as AdaBoost, Canny edge detection, and color model conversion are applied to the acquired face texture image to determine the key feature regions (eyes, lips) to be masked, and a 2D mask region is determined from the face image. The generated mask region image is then projected onto the 3D surface geometry model and used to label the final 3D feature-region mask. The mask determined through this preprocessing is used to perform natural makeup simulation through a virtual interface based on a haptic device and a stereoscopic display. The proposed method determines the mask regions for makeup automatically, without any user intervention during preprocessing, providing a precise and easy makeup painting interface.

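The 2D-mask-to-3D projection step above can be sketched in a few lines: a toy chromaticity threshold stands in for the paper's AdaBoost/Canny/color-conversion pipeline, and mesh vertices are labeled by sampling the 2D mask at their texture coordinates. All thresholds and names are illustrative assumptions:

```python
import numpy as np

def lip_mask_from_chromaticity(rgb, r_thresh=0.45):
    """Rough 2D lip mask via normalized-r chromaticity (a stand-in for the
    paper's color-model conversion; the threshold is an assumption)."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=-1) + 1e-8
    r_chroma = rgb[..., 0] / total
    return r_chroma > r_thresh

def label_mask_vertices(uv, mask):
    """Project a 2D mask onto mesh vertices via their texture coordinates."""
    h, w = mask.shape
    px = np.clip((uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip((uv[:, 1] * (h - 1)).astype(int), 0, h - 1)
    return mask[py, px]

# toy texture: a reddish block in the middle stands in for the lips
tex = np.zeros((8, 8, 3), dtype=np.uint8)
tex[..., :] = (90, 90, 90)          # neutral skin color
tex[3:5, 2:6] = (200, 60, 60)       # "lip" pixels

mask2d = lip_mask_from_chromaticity(tex)
uv = np.array([[0.4, 0.5], [0.0, 0.0]])   # one vertex on the lips, one off
labels = label_mask_vertices(uv, mask2d)
print(labels)   # → [ True False]
```

The labeled vertices would then restrict the haptic painting strokes to the masked feature regions during the simulation.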

Automatic Anticipation Generation for 3D Facial Animation (3차원 얼굴 표정 애니메이션을 위한 기대효과의 자동 생성)

  • Choi Jung-Ju;Kim Dong-Sun;Lee In-Kwon
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.1
    • /
    • pp.39-48
    • /
    • 2005
  • According to traditional 2D animation techniques, anticipation makes an animation much more convincing and expressive. We present an automatic method for inserting anticipation effects into an existing facial animation. Our approach assumes that an anticipatory facial expression can be found within an existing facial animation if it is long enough. Vertices of the face model are classified into a set of components using principal component analysis directly from given key-framed and/or motion-captured facial animation data. The vertices in a single component have similar directions of motion in the animation. For each component, the animation is examined to find an anticipation effect for the given facial expression. One of those anticipation effects is selected as the best anticipation effect, which preserves the topology of the face model. The best anticipation effect is automatically blended with the original facial animation while preserving the continuity and the entire duration of the animation. We show experimental results for given motion-captured and key-framed facial animations. This paper deals with part of a broad subject: the application of the principles of traditional 2D animation to 3D animation. We show how to incorporate anticipation into 3D facial animation. Animators can produce 3D facial animation with anticipation simply by selecting the facial expression in the animation.
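The component classification step above (grouping vertices by their motion direction via principal component analysis) might be sketched as follows. The greedy grouping and cosine threshold are simplifications for illustration, not the paper's exact algorithm:

```python
import numpy as np

def principal_direction(displacements):
    """First principal axis of a vertex's per-frame displacements (PCA via SVD)."""
    d = displacements - displacements.mean(axis=0)
    _, _, vt = np.linalg.svd(d, full_matrices=False)
    return vt[0]

def group_by_direction(anim, cos_thresh=0.9):
    """Greedily group vertices whose principal motion directions agree.
    anim: (frames, n_vertices, 3). A simplified stand-in for the paper's
    PCA-based component classification."""
    n = anim.shape[1]
    dirs = np.array([principal_direction(anim[:, v, :]) for v in range(n)])
    labels = -np.ones(n, dtype=int)
    next_label = 0
    for v in range(n):
        if labels[v] >= 0:
            continue
        labels[v] = next_label
        for w in range(v + 1, n):
            # sign-agnostic cosine similarity between principal directions
            if labels[w] < 0 and abs(dirs[v] @ dirs[w]) > cos_thresh:
                labels[w] = next_label
        next_label += 1
    return labels

# two vertices moving along x, one along z → two components
t = np.linspace(0, 1, 10)
anim = np.zeros((10, 3, 3))
anim[:, 0, 0] = t          # vertex 0: x motion
anim[:, 1, 0] = 0.5 * t    # vertex 1: x motion
anim[:, 2, 2] = t          # vertex 2: z motion
comp = group_by_direction(anim)
print(comp)   # → [0 0 1]
```

Each resulting component would then be searched independently for a usable anticipatory motion.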

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung;Kim, Young-Bong
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.7
    • /
    • pp.159-170
    • /
    • 2009
  • It is very important to extract expression data and capture a face image from a video for online 3D face animation. Recently, there have been many studies on vision-based approaches that capture the expression of an actor in a video and apply it to a 3D face model. In this paper, we propose an automatic data extraction system, which extracts and traces a face and expression data from real-time video input. Our system consists of three steps: face detection, face feature extraction, and face tracing. In face detection, we detect skin pixels using a YCbCr skin color model and verify the face area using a Haar-based classifier. We use brightness and color information to extract the eye and lip data related to facial expression. We extract 10 feature points from the eye and lip areas, considering the FAPs defined in MPEG-4. Then, we trace the displacement of the extracted features across continuous frames using a color probability distribution model. Experiments showed that our system can trace expression data at about 8 fps.
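The skin detection step can be sketched with a plain BT.601 RGB-to-YCbCr conversion and Cb/Cr thresholds. The ranges used here are common literature values, not necessarily those of the paper:

```python
import numpy as np

def skin_mask_ycbcr(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Classify skin pixels by thresholding Cb/Cr after an ITU-R BT.601
    RGB-to-YCbCr conversion. The Cb/Cr ranges are typical literature
    values, assumed here rather than taken from the paper."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return ((cb_range[0] <= cb) & (cb <= cb_range[1]) &
            (cr_range[0] <= cr) & (cr <= cr_range[1]))

# a skin-toned pixel and a blue pixel
pixels = np.array([[[224, 172, 150], [0, 0, 255]]], dtype=np.uint8)
mask = skin_mask_ycbcr(pixels)
print(mask)   # → [[ True False]]
```

The resulting skin regions would then be verified as faces by the Haar-based classifier before feature extraction.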

Real-Time Facial Feature Point Extraction Using Color-Information-Based Region Segmentation and Region Symmetry (실시간 얼굴 특징 점 추출을 위한 색 정보 기반의 영역분할 및 영역 대칭 기법)

  • 최승혁;김재경;박준;최윤철
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2004.10b
    • /
    • pp.721-723
    • /
    • 2004
  • Recently, as the use of avatars in virtual environments has grown rapidly, research on avatar animation has become active. In particular, natural, human-like facial animation of an avatar gives the user a sense of life-likeness and believability, so the avatar can serve as a more familiar interface. Techniques for extracting facial feature points for such facial animation have been studied continuously. However, research on optimized algorithms that generate motion from a human face in real time, apply it directly to a 3D face model, and build a motion library has been insufficient. This paper proposes a technique for applying animation and generating a motion library through real-time feature point recognition from an actual human face. In the proposed technique, color information is processed to extract the face region for fast and accurate feature point extraction, the region is segmented to extract the required feature points, and a recovery algorithm using symmetric points was developed to produce natural motion when errors occur. Using this color-information-based region segmentation and region symmetry technique, we generated and applied a seamless, natural facial motion library in real time.

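The symmetry-based recovery idea above (restoring a lost feature point from its mirror counterpart) can be sketched as a reflection across the face's vertical symmetry axis. The function and parameter names are ours, for illustration only:

```python
import numpy as np

def recover_by_symmetry(points, lost_idx, mirror_idx, axis_x):
    """When tracking of one feature point fails, estimate its position by
    reflecting its symmetric counterpart across the face's vertical
    symmetry axis at x = axis_x (a sketch of symmetry-based recovery)."""
    p = points.copy()
    mx, my = points[mirror_idx]
    p[lost_idx] = (2.0 * axis_x - mx, my)   # mirror x, keep y
    return p

# eyes at x=40 and x=80; symmetry axis at x=60; right eye (index 1) lost
pts = np.array([[40.0, 50.0], [np.nan, np.nan]])
recovered = recover_by_symmetry(pts, lost_idx=1, mirror_idx=0, axis_x=60.0)
print(recovered[1])   # → [80. 50.]
```

In practice the axis would itself be estimated from stably tracked features (e.g. the nose line) each frame.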

A System for Hierarchical Face Plastic Surgery Using Facial 3D Models (얼굴 3D모델을 이용한 계층적 얼굴성형 시스템)

  • 신승철;조은규;유건수;박상운;최창석
    • Proceedings of the IEEK Conference
    • /
    • 2003.07d
    • /
    • pp.1657-1660
    • /
    • 2003
  • This paper presents a system for hierarchical face plastic surgery using facial 3D models. For the system, hierarchical plastic-surgery objects are constructed from the facial 3D model, and feature points are set on each object. A plastic surgery solution was developed to allow variation in the scale, type, angle, and position of each plastic-surgery object. Plastic surgery can then be simulated by combining the objects, solutions, and vectors selected by the user.


Generating Face Textures for 3D Avatars from Photos (실사 영상을 사용한 3차원 아바타 얼굴 텍스쳐 생성)

  • Kim, Dong-Hee;Yoon, Jong-Hyun;Park, Jong-Seung
    • Journal of Korea Game Society
    • /
    • v.8 no.1
    • /
    • pp.49-58
    • /
    • 2008
  • In this paper, we propose a texture generation scheme for 3D avatars from three or more human face photos. First, we manually mark image positions corresponding to the vertices of a given UVW map. Then, a face texture is automatically generated from the photo images. The proposed texture generation scheme greatly reduces the amount of manual work compared with classical methods such as Photoshop-based schemes. The generated textures are photorealistic, since they fully reflect the naturalness of the original photos. The texture creation scheme can be applied to any kind of mesh structure of 3D models, and the mesh structures need not be changed to accommodate the given textures. We created face textures from several triplets of photos and mapped them to 3D avatar faces. Experimental results showed that the visual realism of avatar faces is much enhanced by the face textures.


A Study on Facial Modeling using Implicit Primitive (음함수 프리미티브를 이용한 얼굴모델링에 대한 연구)

  • Lee Hyun-Cheol;Song Yong-Kyu;Kim Eun-Seok;Hur Gi-Taek
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2005.11a
    • /
    • pp.466-469
    • /
    • 2005
  • Recently, in computer graphics, research on 3D animation has been very active. One of the important research areas in 3D animation is the animation of human beings. The implicit surface model is convenient for modeling objects with complicated surfaces, such as 3D characters and liquids. Moreover, it can represent various forms of surfaces using a relatively small amount of data. In this paper, we propose a method of facial model generation using implicit primitives.

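A minimal example of an implicit primitive model: the field is a sum of Gaussian blobs, and the surface is an iso-contour of that field. This is a generic blob formulation for illustration, not necessarily the paper's exact primitive:

```python
import numpy as np

def field(p, centers, strengths, radii):
    """Implicit field as a sum of Gaussian blob primitives:
    f(p) = sum_i s_i * exp(-|p - c_i|^2 / r_i^2).
    The modeled surface is the iso-contour f(p) = iso."""
    d2 = ((centers - p) ** 2).sum(axis=1)
    return float((strengths * np.exp(-d2 / radii ** 2)).sum())

# two overlapping blobs, e.g. standing in for two cheek primitives
centers = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
strengths = np.array([1.0, 1.0])
radii = np.array([1.0, 1.0])

inside = field(np.array([0.0, 0.0, 0.0]), centers, strengths, radii)
outside = field(np.array([5.0, 5.0, 5.0]), centers, strengths, radii)
print(inside > 0.5, outside < 0.5)   # → True True
```

Because a few primitives describe a smooth blended surface, the representation needs far less data than an explicit mesh of comparable smoothness, which is the property the abstract highlights.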

Feature Extraction Method of 2D-DCT for Facial Expression Recognition (얼굴 표정인식을 위한 2D-DCT 특징추출 방법)

  • Kim, Dong-Ju;Lee, Sang-Heon;Sohn, Myoung-Kyu
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.3 no.3
    • /
    • pp.135-138
    • /
    • 2014
  • This paper proposes a facial expression recognition method robust to overfitting, using 2D-DCT and the EHMM algorithm. In particular, enhanced recognition performance is achieved by setting a large window size for 2D-DCT feature extraction when extracting the observation vectors of the EHMM. Experimental results on the CK and JAFFE facial expression databases showed that recognition accuracy improved as the window size increased. When the CK database was used for training and the JAFFE database for testing, the proposed method achieved a recognition accuracy of 87.79%, an improvement ranging from 46.01% to 50.05% over previous histogram-feature-based approaches.
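A 2D-DCT observation-vector extraction like the one described can be sketched as follows. The orthonormal DCT implementation and the choice to keep the top-left low-frequency block (rather than a zig-zag scan) are simplifications for illustration:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    c[0, :] = np.sqrt(1.0 / n)   # DC row gets the 1/sqrt(n) scale
    return c

def dct2_features(window, n_coeffs=4):
    """2D-DCT of a square image window; keep the top-left (low-frequency)
    block of coefficients as the observation vector."""
    n = window.shape[0]
    c = dct_matrix(n)
    coeffs = c @ window @ c.T          # separable 2D DCT
    return coeffs[:n_coeffs, :n_coeffs].ravel()

win = np.outer(np.arange(8), np.ones(8)).astype(float)  # toy gradient window
feat = dct2_features(win, n_coeffs=2)
print(feat.shape)   # → (4,)
```

A larger window size, as the abstract argues, makes each observation vector summarize a broader image region, which is what reduces overfitting in the EHMM.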