• Title/Summary/Keyword: Facial Motion Capture (얼굴 모션 캡처)

Search Results: 9

Comparison and Analysis of Motion Capture and Key Animation - Focused on Animation of Countenance - (모션 캡처와 키 애니메이션의 비교분석 - 얼굴표정애니메이션을 중심으로 -)

  • Jang, Wook;Choi, Sung-Kyu;Lee, Tae-Gu
    • The Journal of the Korea Contents Association / v.7 no.4 / pp.160-169 / 2007
  • The main problem in domestic motion-capture-based production is that motion data are used even in cases where human sensibility is needed; the work fails to convey a human quality because production methods apply motion capture data unconditionally and hastily. Although motion capture is effective and applicable to many areas, failing to address these problems can cause an enormous loss of capital and labor. In the present study, we compare motion capture with key animation production and analyze their respective merits and shortcomings. We also analyze them through actual production and present an efficient method of key animation production for cases where expensive motion capture devices are not available.

The Multi-marker Tracking for Facial Optical Motion Capture System (Facial Optical Motion Capture System을 위한 다중 마커의 추적)

  • 이문희;김경석
    • Proceedings of the Korea Multimedia Society Conference / 2000.04a / pp.474-477 / 2000
  • Recently, in 3D animation, film special effects, and game production, motion capture systems have been used to numerically measure actual human motion and facial expressions and apply them directly to animation, dramatically reducing work time, labor, and capital. However, existing motion capture systems are expensive because they use high-speed cameras, and they also have various problems in movement tracking. This paper proposes an economical and efficient facial movement tracking technique, applicable to a motion capture system for facial animation, that uses ordinary low-cost cameras together with neural networks and image processing.


The Multi-marker Tracking for Facial Animation (Facial Animation을 위한 다중 마커의 추적)

  • 이문희;김철기;김경석
    • Proceedings of the Korea Multimedia Society Conference / 2001.06a / pp.553-557 / 2001
  • Animating facial expressions is recognized as one of the most difficult areas of computer animation because of the complexity of facial structure and the subtle movements of the facial surface. Recently, in 3D animation, film special effects, and game production, motion capture systems have been used to numerically measure actual human motion and facial expressions and apply them directly to animation, dramatically reducing work time, labor, and capital. However, existing motion capture systems are expensive because they use high-speed cameras, and they also have various problems in movement tracking. This paper proposes an economical and efficient facial movement tracking technique, applicable to a motion capture system for facial animation, that uses ordinary low-cost cameras together with neural networks and image processing techniques.

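The two marker-tracking papers above replace high-speed cameras with low-cost cameras plus neural networks and image processing; the abstracts do not give their algorithm. As an illustrative sketch only (not the papers' method), frame-to-frame multi-marker tracking can be reduced to a nearest-neighbor assignment between the previously tracked marker positions and the markers detected in the current frame:

```python
import numpy as np

def track_markers(prev_pos, detections, max_dist=10.0):
    """Assign each previously tracked marker to the nearest unclaimed
    detection in the current frame; a marker with no detection within
    max_dist keeps its previous position (crude occlusion handling)."""
    new_pos = prev_pos.copy()
    taken = set()
    for i, p in enumerate(prev_pos):
        d = np.linalg.norm(detections - p, axis=1)
        for j in np.argsort(d):
            if d[j] > max_dist:
                break
            if j not in taken:
                taken.add(j)
                new_pos[i] = detections[j]
                break
    return new_pos

# toy example: three facial markers shift slightly between frames
prev = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
det = np.array([[0.2, 0.1], [5.1, -0.2], [0.1, 5.3]])
tracked = track_markers(prev, det)
```

A real system would add a marker detector (thresholding or a trained network) in front of this assignment step; the greedy matching here is the simplest possible data-association choice.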

Facial Expression Animation which Applies a Motion Data in the Vector based Caricature (벡터 기반 캐리커처에 모션 데이터를 적용한 얼굴 표정 애니메이션)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association / v.10 no.5 / pp.90-98 / 2010
  • This paper describes a method that enables a user to generate facial expression animation for a vector-based caricature by applying facial motion data. The method was implemented as an Illustrator plug-in with its own user interface. For the experimental data, 28 small markers were attached to the major muscle areas of an actor's face, and a variety of expressions were captured with Facial Tracker. The caricature was produced as Bezier curves, each with control points placed at the locations corresponding to the key markers on the actor's face at capture time, so that identical regions could be linked to the motion data. Because the facial motion data and the caricature differ in spatial scale, the data go through a motion calibration process, which the user can adjust at any time. To connect the caricature and the markers, the user selects the name of each facial region from a menu and then clicks the corresponding region of the caricature. Using the Illustrator user interface, this method makes it possible to generate caricature facial expression animation by applying facial motion data to a vector-based caricature.
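The abstract above describes scaling captured marker displacements to the caricature's coordinate space before driving the Bezier control points. A minimal sketch of that calibration-and-apply step, where the function names and the uniform scale factor are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def apply_motion_to_controls(neutral_controls, neutral_markers,
                             frame_markers, scale):
    """Move each Bezier control point by its linked marker's displacement
    from the neutral pose, scaled into the caricature's coordinate space."""
    delta = (frame_markers - neutral_markers) * scale
    return neutral_controls + delta

# toy example: two control points each linked to one facial marker
controls = np.array([[10.0, 10.0], [20.0, 10.0]])   # caricature space
neutral = np.array([[1.0, 1.0], [2.0, 1.0]])        # capture space, rest pose
frame = np.array([[1.0, 1.5], [2.0, 0.5]])          # one captured frame
moved = apply_motion_to_controls(controls, neutral, frame, scale=2.0)
# moved == [[10, 11], [20, 9]]
```

The paper additionally lets the user adjust the calibration interactively; here that corresponds to editing the `scale` argument between frames.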

Comparative Analysis of Markerless Facial Recognition Technology for 3D Character's Facial Expression Animation -Focusing on the method of Faceware and Faceshift- (3D 캐릭터의 얼굴 표정 애니메이션 마커리스 표정 인식 기술 비교 분석 -페이스웨어와 페이스쉬프트 방식 중심으로-)

  • Kim, Hae-Yoon;Park, Dong-Joo;Lee, Tae-Gu
    • Cartoon and Animation Studies / s.37 / pp.221-245 / 2014
  • With the success of the world's first 3D computer animated film, "Toy Story," in 1995, industrial development of 3D computer animation gained considerable momentum. Consequently, various 3D animations were produced for TV, and high-quality 3D computer animation games became common. To reduce 3D animation production time and cost, technological development has been pursued actively in step with growing industrial demand in this field. Compared with the traditional approach of producing animation through hand drawings, the efficiency of 3D computer animation production is far greater. In this study, an experiment and comparative analysis of markerless motion capture systems for facial expression animation were conducted, aiming to improve the efficiency of 3D computer animation production. The Faceware system, a product of Image Metrics, provides sophisticated production tools despite the complexity of its motion capture recognition and application process. The Faceshift system, a product of the company of the same name, though relatively less sophisticated, provides rapid real-time motion recognition. It is hoped that the results of the comparative analysis presented in this paper will serve as baseline data for selecting the appropriate motion capture or keyframe animation method for the most efficient production of facial expression animation, in accordance with production time and cost, the required degree of sophistication, and the media in use.

Direct Retargeting Method from Facial Capture Data to Facial Rig (페이셜 리그에 대한 페이셜 캡처 데이터의 다이렉트 리타겟팅 방법)

  • Cho, Hyunjoo;Lee, Jeeho
    • Journal of the Korea Computer Graphics Society / v.22 no.2 / pp.11-19 / 2016
  • This paper proposes a method to directly retarget facial motion capture data to a facial rig. The facial rig is an essential tool in the production pipeline that helps artists create facial animation. Direct mapping from motion capture data to the facial rig is highly convenient, because artists are already familiar with facial rigs and the mapping results are immediately ready for the artist's follow-up editing. However, mapping motion data onto a facial rig is not a trivial task: rigs vary widely in structure, so it is hard to devise a generalized mapping method for arbitrary facial rigs. In this paper, we propose a data-driven approach for robust mapping from motion capture data to an arbitrary facial rig. The results show that our method is intuitive and increases productivity in the creation of facial animation. We also show that our method can successfully retarget expressions to non-human characters whose facial shapes differ greatly from human faces.
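The abstract does not specify the paper's data-driven mapping. As a hedged illustration of the general idea, one common data-driven retargeting scheme fits a linear map from capture-space marker vectors to rig control values using a few artist-authored example pose pairs; every name and value below is hypothetical:

```python
import numpy as np

# example pose pairs: rows of X are flattened capture-marker vectors,
# rows of Y are the rig control values an artist authored for each pose
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Y = np.array([[0.0], [0.5], [0.2], [0.7]])

# fit a linear retargeting map W (with a bias column) by least squares
Xb = np.hstack([X, np.ones((len(X), 1))])
W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)

def retarget(markers):
    """Map a capture-space marker vector to rig control values."""
    return np.hstack([markers, [1.0]]) @ W

controls = retarget(np.array([0.5, 0.5]))   # ~0.35 for this toy data
```

Because only the example pairs encode the rig's semantics, the same fitting step works for rigs of any structure, including non-human characters, which is the property the paper emphasizes.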

Comparative Analysis of Linear and Nonlinear Projection Techniques for the Best Visualization of Facial Expression Data (얼굴 표정 데이터의 최적의 가시화를 위한 선형 및 비선형 투영 기법의 비교 분석)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association / v.9 no.9 / pp.97-104 / 2009
  • This paper compares and analyzes methodologies for finding the optimal technique for projecting high-dimensional facial motion capture data onto a plane. We apply the frame-by-frame high-dimensional facial expression data to both a linear projection technique, PCA, and to nonlinear projection techniques: Isomap, MDS, CCA, Sammon's Mapping, and LLE. We then examine how the data are distributed in the resulting low-dimensional space and analyze the results. To this end, we compute the distances between the original high-dimensional facial expression frames and distribute the frames in a two-dimensional plane so as to preserve those distance relationships under each projection technique. By comparing the facial expression data distributed in two-dimensional space with the original data, we identify the projection technique that best preserves the distance relationships between frames. Finally, the paper compares and analyzes the linear and nonlinear projection techniques for projecting high-dimensional facial expression data into low-dimensional space and identifies the optimal one.
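The comparison above can be reproduced in outline with off-the-shelf implementations: project the high-dimensional frames to 2D with each technique, then score how well pairwise distances are preserved. A sketch using scikit-learn, with synthetic stand-in data (CCA and Sammon's Mapping are omitted, and the distance-correlation score is an illustrative choice, not necessarily the paper's metric):

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.decomposition import PCA
from sklearn.manifold import MDS, Isomap, LocallyLinearEmbedding

rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 30))   # stand-in for high-dim expression frames

def distance_preservation(high, low):
    """Correlation between high- and low-dimensional pairwise distances
    (1.0 means distance relationships are perfectly preserved)."""
    return np.corrcoef(pdist(high), pdist(low))[0, 1]

methods = {
    "PCA": PCA(n_components=2),
    "Isomap": Isomap(n_components=2),
    "MDS": MDS(n_components=2, random_state=0),
    "LLE": LocallyLinearEmbedding(n_components=2, random_state=0),
}
scores = {name: distance_preservation(frames, m.fit_transform(frames))
          for name, m in methods.items()}
best = max(scores, key=scores.get)
```

On real expression data the ranking depends on the manifold's shape, which is exactly why the paper runs the comparison rather than picking one technique up front.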

3D Volumetric Capture-based Dynamic Face Production for Hyper-Realistic Metahuman (극사실적 메타휴먼을 위한 3D 볼류메트릭 캡쳐 기반의 동적 페이스 제작)

  • Oh, Moon-Seok;Han, Gyu-Hoon;Seo, Young-Ho
    • Journal of Broadcast Engineering / v.27 no.5 / pp.751-761 / 2022
  • With the development of digital graphics technology, the metaverse has become a significant trend in the content market, and demand for technology that generates high-quality 3D models is rapidly increasing. Accordingly, various technical attempts are being made to create high-quality 3D virtual humans, represented by digital humans. 3D volumetric capture is in the spotlight as a technology that can create a 3D model faster and more precisely than existing 3D modeling methods. In this study, we analyze high-precision 3D facial production technology based on practical cases, covering the difficulties in content production and the technologies applied to volumetric 3D and 4D model creation. Based on an actual model implementation case using 3D volumetric capture, we examine techniques for 3D virtual human face production and produce a new metahuman using a graphics pipeline for efficient human face generation.

Phased Visualization of Facial Expressions Space using FCM Clustering (FCM 클러스터링을 이용한 표정공간의 단계적 가시화)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association / v.8 no.2 / pp.18-26 / 2008
  • This paper presents a phased visualization method for a facial expression space that enables the user to control the facial expressions of 3D avatars by selecting a sequence of facial frames from the space. Our system creates the 2D facial expression space from approximately 2,400 facial expression frames, comprising a neutral expression and 11 motions. Facial expression control of the 3D avatar is carried out in real time as users navigate through the expression space. Because the expression space must support control in phases, from broad expressions down to detailed ones, the system needs a phased visualization method; for this, the paper uses fuzzy clustering. Initially, the system creates 11 clusters from the space of 2,400 facial expressions, and each time the phase level increases, it doubles the number of clusters. Since the computed cluster centers do not coincide with actual expressions in the space, we take the expression closest to each cluster center as that cluster's center. We had users control the phased facial expressions of a 3D avatar with the system and evaluated the system based on the results.
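The phased clustering described above can be sketched with a small fuzzy c-means (FCM) implementation: cluster the frames, snap each center to its nearest actual expression frame, and double the cluster count per level. This is a generic FCM sketch under assumed parameters (fuzzifier m = 2, random initialization), not the paper's code:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=50, seed=0):
    """Basic fuzzy c-means: returns cluster centers and the fuzzy
    membership matrix U of shape (n_samples, c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))       # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

def snap_to_frames(centers, X):
    """Replace each center with the nearest actual expression frame."""
    idx = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2).argmin(axis=0)
    return X[idx]

rng = np.random.default_rng(1)
frames = rng.normal(size=(240, 2))          # stand-in for the 2D expression space
for level, c in enumerate([11, 22, 44]):    # cluster count doubles per phase level
    centers, U = fuzzy_c_means(frames, c)
    reps = snap_to_frames(centers, frames)  # representative expressions to display
```

Snapping centers to real frames matters here because the displayed representatives must be expressions the avatar can actually assume, not interpolated averages.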