
3D Volumetric Capture-based Dynamic Face Production for Hyper-Realistic Metahuman

  • Moonseok Oh (School of Media Communication, Kwangwoon University) ;
  • Gyu-Hoon Han (Omotion Co., Ltd.) ;
  • Young-Ho Seo (Department of Electronic Materials Engineering, Kwangwoon University)
  • Received : 2022.08.09
  • Accepted : 2022.08.22
  • Published : 2022.09.30

Abstract


With the development of digital graphics technology, the metaverse has become a major trend in the content market, and demand for technology that generates high-quality 3D models is rapidly increasing. Accordingly, various technical attempts are being made to create high-quality 3D virtual humans, represented by digital humans. 3D volumetric capture has attracted attention as a technology that can create a 3D human model faster and more precisely than existing 3D modeling methods. In this study, we analyze 3D high-precision facial production technology based on practical cases, covering both the techniques applied to volumetric 3D and 4D model creation and the difficulties encountered in content production. Based on an actual model implementation using 3D volumetric capture, we examine techniques for producing the face of a 3D virtual human and produce a new metahuman using a graphics pipeline designed for efficient human facial generation.


Acknowledgments

This research was supported by the Ministry of Culture, Sports and Tourism (MCST) and the Korea Creative Content Agency (KOCCA) under the Culture Technology (CT) Research & Development Program 2022. This research was also conducted with the support of a Research Grant from Kwangwoon University in 2022.

참고문헌

  1. Lee, S.-L, "The Meanings of Fashion on the Social Media of Virtual Influencer Lil Miquela," Journal of Digital Convergence, 19(9), pp. 323-333, 2021. doi: https://doi.org/10.14400/JDC.2021.19.9.323
  2. S. Hwang and M.-C. Lee, "Analysis of the Value Change of Virtual Influencers as Seen in the Press and Social Media Using Text Mining," The Korean Journal of Advertising and Public Relations, c23(4), pp.265-299, 2021.
  3. N. Kumar, A. Narang and B. Lall, "Zero-Shot Normalization Driven Multi-Speaker Text to Speech Synthesis," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 30, pp. 1679-1693, 2022, doi: https://doi.org/10.1109/TASLP.2022.3169634.
  4. D. Websdale, S. Taylor and B. Milner, "Speaker-Independent Speech Animation Using Perceptual Loss Functions and Synthetic Data," IEEE Transactions on Multimedia, vol. 24, pp. 2539-2552, 2022, doi: https://doi.org/10.1109/TMM.2021.3087020.
  5. Darragh Higgins, Katja Zibrek, Joao Cabral, Donal Egan, Rachel McDonnell, "Sympathy for the digital: Influence of synthetic voice on affinity, social presence and empathy for photorealistic virtual humans," Computers & Graphics, Volume 104, pp. 116-128, ISSN 0097-8493, 2022. doi: https://doi.org/10.1016/j.cag.2022.03.009.
  6. B. Mones and S. Friedman, "Veering around the Uncanny Valley: Revealing the underlying structure of facial expressions," 2011 IEEE International Conference on Automatic Face & Gesture Recognition (FG), pp. 345-345, 2011. doi: https://doi.org/10.1109/FG.2011.5771423.
  7. S. Rackovic, C. Soares, D. Jakovetic, Z. Desnica and R. Ljubobratovic, "Clustering of the Blendshape Facial Model," 2021 29th European Signal Processing Conference (EUSIPCO), pp. 1556-1560, 2021. doi: https://doi.org/10.23919/EUSIPCO54536.2021.9616061.
  8. F. Danieau et al., "Automatic Generation and Stylization of 3D Facial Rigs," 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 784-792, 2019. doi: https://doi.org/10.1109/VR.2019.8798208.
  9. Young-Ho Seo, Moonseok Oh, Gyu-Hoon Han, "The present and future of the digital human," Broadcasting and Media Magazine, 26(4), pp. 72-81, 2021.
  10. M.Oh, G.-H. Han and Y.-H. Seo, "A Study on the Production Techniques of Digital Humans and Metahuman for Metaverse," Design Research 6, no.3, pp. 133-142, June, 2021. doi: https://doi.org/10.46248/kidrs.2021.3.133
  11. M. Oh, G.-H. Han, S.-G. Park and Y.-H. Seo. "A study on analysis of graphics pipeline for 4D volumetric capturing," Design Research 6, no.3, 2021 : 9-18. doi: https://doi.org/10.46248/kidrs.2021.3.9
  12. Y.-H. Seo, "Volumetric Photorealistic 4D Image Technology", Broadcasting and Media Magazine, 26(2), pp. 56-66, April, 2021.
  13. B. Egger et al, "3D Morphable Face Models - Past, Present and Future," arXivLabs, Cornell University, 2020. https://doi.org/10.48550/arXiv.1909.01815
  14. F. Liu, L. Tran, X. Liu., "3D Face Modeling From Diverse Raw Scan Data," arXivLabs, Cornell University, 2019.
  15. B. Moghaddam, J. Lee, H. Pfister and Raghu Machiraju, "Model-based 3D face capture with shape-from-silhouettes," 2003 IEEE International SOI Conference. Proceedings (Cat. No.03CH37443), pp. 20-27, 2003. doi: https://doi.org/10.1109/AMFG.2003.1240819