• Title/Summary/Keyword: Facial capture

Search results: 64

Real-time Facial Modeling and Animation based on High Resolution Capture (고해상도 캡쳐 기반 실시간 얼굴 모델링과 표정 애니메이션)

  • Byun, Hae-Won
    • Journal of Korea Multimedia Society / v.11 no.8 / pp.1138-1145 / 2008
  • Recently, performance-driven facial animation has become popular in various areas. In television and games, it is important to guarantee real-time animation for characters whose appearance differs from that of the performer. In this paper, we present a new facial animation approach based on motion capture. For this purpose, we address three issues: facial expression capture, expression mapping, and facial animation. Finally, we show the results of various experiments on different types of face models.

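The expression-mapping step named in the abstract above has to bridge the shape difference between performer and character. As a minimal illustration (not the paper's actual algorithm), one common approach rescales captured feature-point displacements from the performer's neutral pose onto the character's geometry; the array names below are hypothetical:

```python
import numpy as np

def map_expression(perf_neutral, perf_frame, char_neutral, char_scale=1.0):
    """Transfer a captured expression to a character with different proportions.

    perf_neutral, perf_frame : (N, 3) arrays of corresponding facial feature
    points on the performer (neutral pose and current captured frame).
    char_neutral : (N, 3) array of the same feature points on the character.
    """
    # Displacement of each feature point away from the performer's neutral pose.
    disp = perf_frame - perf_neutral

    # Rescale displacements by the ratio of overall face sizes, a crude proxy
    # for the proportion difference between performer and character.
    perf_size = np.linalg.norm(perf_neutral.max(0) - perf_neutral.min(0))
    char_size = np.linalg.norm(char_neutral.max(0) - char_neutral.min(0))
    disp *= (char_size / perf_size) * char_scale

    # Apply the adapted displacements to the character's neutral geometry.
    return char_neutral + disp
```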

Real-time Markerless Facial Motion Capture of Personalized 3D Real Human Research

  • Hou, Zheng-Dong;Kim, Ki-Hong;Lee, David-Junesok;Zhang, Gao-He
    • International Journal of Internet, Broadcasting and Communication / v.14 no.1 / pp.129-135 / 2022
  • Digital models of real humans appear more and more frequently in VR/AR applications, making real-time markerless facial capture for personalized virtual humans an important research topic. The traditional way to produce personalized facial animation requires several experienced animators, and in practice the complex process and difficult technology can be an obstacle for inexperienced users. This paper proposes a new pipeline that is cheaper and faster than the traditional production method. Starting from a personalized face model obtained by 3D reconstruction, the model is first retopologized with R3ds Wrap; Avatary is then used to build the 52 blendshape files required by ARKit; finally, real-time markerless facial motion capture of the 3D human is realized on the UE4 platform. The study makes rational use of each tool's strengths and proposes a more efficient workflow for real-time markerless facial capture of personalized 3D human models; the proposed process can be helpful to other researchers working on similar problems.
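
The last stage of this pipeline amounts to forwarding the 52 per-frame ARKit coefficient values (each in [0, 1]) onto identically named blendshapes, which the paper does with Live Link in UE4. A minimal engine-agnostic sketch of that driving loop, with an illustrative stand-in class rather than any real engine API:

```python
# Illustrative sketch: apply ARKit-style blendshape coefficients to a mesh.
# ARKIT_NAMES is a small subset of the 52 standard coefficient names.
ARKIT_NAMES = ["jawOpen", "mouthSmileLeft", "mouthSmileRight",
               "eyeBlinkLeft", "eyeBlinkRight", "browInnerUp"]

class BlendshapeMesh:
    """Stand-in for an engine mesh with named morph targets."""
    def __init__(self, names):
        self.weights = {name: 0.0 for name in names}

    def set_morph_target(self, name, weight):
        # Clamp to the valid ARKit coefficient range [0, 1].
        self.weights[name] = min(max(weight, 0.0), 1.0)

def apply_frame(mesh, coefficients):
    """coefficients: dict of ARKit name -> value for one captured frame."""
    for name, value in coefficients.items():
        if name in mesh.weights:          # skip names the model lacks
            mesh.set_morph_target(name, value)

mesh = BlendshapeMesh(ARKIT_NAMES)
apply_frame(mesh, {"jawOpen": 0.42, "eyeBlinkLeft": 1.0})
```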

Data-driven Facial Expression Reconstruction for Simultaneous Motion Capture of Body and Face (동작 및 표정 동시 포착을 위한 데이터 기반 표정 복원에 관한 연구)

  • Park, Sang Il
    • Journal of the Korea Computer Graphics Society / v.18 no.3 / pp.9-16 / 2012
  • In this paper, we present a new method for reconstructing detailed facial expressions from roughly captured data with a small number of markers. Because of the difference in required capture resolution between full-body capture and facial expression capture, the two have rarely been performed simultaneously. However, a simultaneous capture of body and face is essential for generating natural animation. For this purpose, we provide a method for capturing detailed facial expressions with only a small number of markers. Our basic idea is to build a database of facial expressions and apply principal component analysis to reduce its dimensionality. The dimensionality reduction enables us to estimate the full data from a part of the data. We demonstrate the viability of the method by applying it to dynamic scenes.
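
The reconstruction idea described above, estimating a full marker set from a sparse subset through a PCA basis learned from a database, can be sketched in a few lines of numpy; the basis size, marker count, and placeholder database here are illustrative, not the paper's values:

```python
import numpy as np

# Database of full facial expressions: M examples, each with N markers (3N dims).
db = np.random.rand(500, 3 * 60)             # placeholder for real capture data
mean = db.mean(axis=0)
# PCA basis from the database; keep the first k principal components.
_, _, Vt = np.linalg.svd(db - mean, full_matrices=False)
basis = Vt[:20]                               # (k, 3N)

def reconstruct(observed_idx, observed_vals):
    """Estimate all 3N marker coordinates from a sparse subset.

    observed_idx  : indices into the 3N-dim marker vector that were captured
    observed_vals : the captured values at those indices
    """
    # Solve for PCA coefficients that best explain the observed coordinates.
    A = basis[:, observed_idx].T              # (|obs|, k)
    b = observed_vals - mean[observed_idx]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Project back to the full marker set.
    return mean + coeffs @ basis
```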

Automatic Synchronization of Separately-Captured Facial Expression and Motion Data (표정과 동작 데이터의 자동 동기화 기술)

  • Jeong, Tae-Wan;Park, Sang-Il
    • Journal of the Korea Computer Graphics Society / v.18 no.1 / pp.23-28 / 2012
  • In this paper, we present a new method for automatically synchronizing captured facial expression data with its corresponding motion data. In a usual optical motion capture set-up, detailed facial expressions cannot be captured in the same session as the body motion because they require a higher capture resolution. The two are therefore captured in separate sessions and must be synchronized in post-processing before they can be used to generate a convincing character animation. Based on the patterns of the actor's neck movement extracted from the two data sets, we present a non-linear time warping method for automatic synchronization. We demonstrate the viability of the method with actual examples.
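
The non-linear time warping referred to above can be illustrated with classic dynamic time warping over two per-frame neck-movement signals. This is a generic textbook formulation rather than the authors' exact method:

```python
import numpy as np

def dtw_align(a, b):
    """Align two 1-D signals (e.g., neck speed per frame) with dynamic
    time warping; returns a list of (i, j) frame correspondences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    # Backtrack from the end to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin((cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]
```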

A Study on the Fabrication of Facial Blend Shape of 3D Character - Focusing on the Facial Capture of the Unreal Engine (3D 캐릭터의 얼굴 블렌드쉐입(blendshape)의 제작연구 -언리얼 엔진의 페이셜 캡처를 중심으로)

  • Lou, Yi-Si;Choi, Dong-Hyuk
    • The Journal of the Korea Contents Association / v.22 no.8 / pp.73-80 / 2022
  • Facial expression is an important means of portraying character in films and animation, and facial capture technology can support the production of facial animation for 3D characters more quickly and effectively. Blendshape techniques are the most widely used methods for producing high-quality 3D facial animation, but traditional blendshape production often takes a long time. The purpose of this study is therefore to shorten the blendshape production period while achieving results that are not far behind traditional production in effectiveness. In this paper, a method that uses a cross-model to transfer blendshapes is compared with the traditional method of making blendshapes, and the validity of the new method is verified. The study used the Kit Boy character provided by Unreal Engine as the experimental subject, conducted facial capture tests using the two blendshape production techniques, and compared and analyzed the facial effects driven by each set of blendshapes.
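
At its simplest, cross-model blendshape transfer copies per-vertex offsets from the source model's blendshapes onto a target that shares its topology, which is exactly what retopologizing makes possible. A naive delta-transfer sketch under that vertex-correspondence assumption (production tools do considerably more):

```python
import numpy as np

def transfer_blendshape(src_neutral, src_shape, dst_neutral, scale=1.0):
    """Naive delta transfer between two meshes with 1:1 vertex correspondence.

    src_neutral, src_shape : (V, 3) source neutral and blendshape vertices
    dst_neutral            : (V, 3) target neutral vertices
    """
    delta = src_shape - src_neutral           # per-vertex blendshape offset
    return dst_neutral + scale * delta        # apply offsets to the target

# A full 52-shape set could then be built from a source set, e.g.:
# dst_shapes = {name: transfer_blendshape(src_n, s, dst_n)
#               for name, s in src_shapes.items()}
```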

Case Study of Short Animation with Facial Capture Technology Using Mobile

  • Jie, Gu;Hwang, Juwon;Choi, Chulyoung
    • International Journal of Internet, Broadcasting and Communication / v.12 no.3 / pp.56-63 / 2020
  • The Avengers films produced by Marvel show visual effects that were impossible to produce in the past. Film special-effects companies initially required large staffs and expensive equipment, but the technology is gradually becoming feasible for smaller companies that have neither. Hardware and software are becoming increasingly available to the general public as well as to experts. As the game industry developed, high-performance computers quickly became widespread, and equipment and software that were once difficult for individuals to purchase became affordable; the development of the cloud has likewise driven down software costs. As the augmented reality (AR) performance of mobile devices improves, advanced technologies such as motion tracking and face recognition no longer require expensive equipment. Under these circumstances, after applying mobile-based facial capture technology in an animation project, we identify the pros and cons and suggest solutions to the problems encountered.

The Multi-marker Tracking for Facial Optical Motion Capture System (Facial Optical Motion Capture System을 위한 다중 마커의 추적)

  • 이문희;김경석
    • Proceedings of the Korea Multimedia Society Conference / 2000.04a / pp.474-477 / 2000
  • Recently, in 3D animation, film special effects, and game production, motion capture systems have been used to measure the movements and facial expressions of real humans numerically and apply them directly to animation, dramatically reducing work time, manpower, and cost. However, existing motion capture systems rely on high-speed cameras, which makes them expensive, and they also suffer from various problems in movement tracking. This paper proposes an economical and efficient facial movement tracking technique for a facial-animation motion capture system, based on ordinary low-cost cameras, a neural network, and image processing.

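The image-processing half of such a low-cost tracker can be sketched with plain thresholding and centroid extraction in OpenCV; the neural-network matching stage the paper describes is omitted here, and the threshold values are illustrative:

```python
import cv2

def track_markers(frame, thresh=200, min_area=4):
    """Find bright facial markers in a frame from an inexpensive camera.

    Returns a list of (x, y) marker centroids. A neural network (as in the
    paper) would then match these detections to markers across frames.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] >= min_area:              # ignore single-pixel noise
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids
```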

Comparative Analysis of Facial Animation Production by Digital Actors - Keyframe Animation and Mobile Capture Animation

  • Choi, Chul Young
    • International journal of advanced smart convergence / v.13 no.3 / pp.176-182 / 2024
  • Looking at the recent game market, classic games released in the past are being re-released with high-quality visuals, and users are generally satisfied. Realistic digital actors, which could not be achieved in the past, are now becoming a reality. Epic Games launched the MetaHuman Creator website in September 2021, allowing anyone to easily create realistic human characters, and the number of animations created with MetaHumans has been increasing since. As characters become more realistic, the movement and expression animations expected by the audience must also be convincingly realized. Until recently, traditional methods were the primary approach for producing realistic character animation. For facial animation, Epic Games introduced an improved method in the Live Link app in 2023, which provides the highest quality among mobile-based techniques. In this context, this paper compares animation produced with keyframed facial animation against mobile-based facial capture. After creating an emotional expression animation of four sentences with each method, the results were compared in Unreal Engine. While the facial capture method is more natural and easier to use, the precise and exaggerated expressions possible with the keyframe method cannot be overlooked, suggesting that a hybrid approach using both methods will likely continue for the foreseeable future.

Direct Retargeting Method from Facial Capture Data to Facial Rig (페이셜 리그에 대한 페이셜 캡처 데이터의 다이렉트 리타겟팅 방법)

  • Cho, Hyunjoo;Lee, Jeeho
    • Journal of the Korea Computer Graphics Society / v.22 no.2 / pp.11-19 / 2016
  • This paper proposes a method to directly retarget facial motion capture data to a facial rig. The facial rig is an essential tool in the production pipeline that helps the artist create facial animation. Direct mapping from motion capture data to the facial rig is very convenient, because artists are already familiar with facial rigs and the mapping produces results that are ready for the artist's follow-up editing. However, mapping motion data onto a facial rig is not a trivial task, because facial rigs vary widely in structure, making it hard to devise a generalized mapping method. In this paper, we propose a data-driven approach for robust mapping from motion capture data to an arbitrary facial rig. The results show that our method is intuitive and increases productivity in the creation of facial animation. We also show that our method can successfully retarget expressions to non-human characters whose face shapes differ greatly from a human's.
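
One simple data-driven formulation of such a direct mapping, shown here as a sketch rather than the authors' exact model, learns a linear map from capture-data features to rig control values using a few artist-posed example pairs:

```python
import numpy as np

def fit_retarget(capture_examples, rig_examples):
    """Least-squares map from capture features to rig controls.

    capture_examples : (E, F) matrix, one captured feature vector per example
    rig_examples     : (E, C) matrix, the artist-set rig controls per example
    """
    # Append a constant column so the map includes a bias term.
    X = np.hstack([capture_examples, np.ones((len(capture_examples), 1))])
    W, *_ = np.linalg.lstsq(X, rig_examples, rcond=None)
    return W                                  # (F + 1, C)

def retarget(W, capture_frame):
    """Map one captured frame directly onto the rig's control values."""
    x = np.append(capture_frame, 1.0)
    return x @ W
```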

Applying MetaHuman Facial Animation with MediaPipe: An Alternative Solution to Live Link iPhone

  • Song, Balgum;Baronas, Arminas
    • International journal of advanced smart convergence / v.13 no.3 / pp.191-198 / 2024
  • This paper presents an alternative solution for applying MetaHuman facial animations using MediaPipe, providing a versatile alternative to the Live Link iPhone system. Our approach captures facial expressions with various camera devices, including webcams, laptop cameras, and Android phones, processes the data for landmark detection, and applies these landmarks in an Unreal Engine Blueprint to animate MetaHuman characters in real time. Techniques such as the Eye Aspect Ratio (EAR) for blink detection and the One Euro Filter for data smoothing ensure accurate and responsive animation. Experimental results demonstrate that our system provides a cost-effective and flexible alternative for iPhone non-users, enhancing the accessibility of advanced facial capture technology for digital media and interactive environments. This research offers a practical and adaptable method for real-time facial animation, with future improvements aimed at integrating more sophisticated emotion detection features.
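
Both techniques named in the abstract are compact enough to sketch. Below are a standard EAR computation over six eye landmarks and a minimal One Euro Filter (after Casiez et al., 2012); the landmark ordering, blink threshold, and filter parameters are illustrative choices, not necessarily the authors':

```python
import math
import numpy as np

def eye_aspect_ratio(p):
    """EAR from six eye landmarks p[0..5], ordered corner/top/top/corner/
    bottom/bottom as in the common formulation; values dropping below
    roughly 0.2 are typically treated as a blink."""
    v1 = np.linalg.norm(np.subtract(p[1], p[5]))   # first vertical distance
    v2 = np.linalg.norm(np.subtract(p[2], p[4]))   # second vertical distance
    h = np.linalg.norm(np.subtract(p[0], p[3]))    # horizontal eye width
    return (v1 + v2) / (2.0 * h)

class OneEuroFilter:
    """Adaptive low-pass filter: smooths jitter at low speeds while keeping
    lag small during fast motion."""
    def __init__(self, freq, min_cutoff=1.0, beta=0.007, d_cutoff=1.0):
        self.freq, self.min_cutoff = freq, min_cutoff
        self.beta, self.d_cutoff = beta, d_cutoff
        self.x_prev = self.dx_prev = None

    @staticmethod
    def _alpha(cutoff, freq):
        tau = 1.0 / (2.0 * math.pi * cutoff)
        return 1.0 / (1.0 + tau * freq)

    def __call__(self, x):
        if self.x_prev is None:                    # first sample passes through
            self.x_prev, self.dx_prev = x, 0.0
            return x
        dx = (x - self.x_prev) * self.freq         # estimated signal speed
        a_d = self._alpha(self.d_cutoff, self.freq)
        dx_hat = a_d * dx + (1 - a_d) * self.dx_prev
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)  # speed-adaptive cutoff
        a = self._alpha(cutoff, self.freq)
        x_hat = a * x + (1 - a) * self.x_prev
        self.x_prev, self.dx_prev = x_hat, dx_hat
        return x_hat
```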